I1219 10:47:13.330274       9 e2e.go:224] Starting e2e run "e9fbc807-224c-11ea-a3c6-0242ac110004" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576752432 - Will randomize all specs
Will run 201 of 2164 specs

Dec 19 10:47:13.573: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 10:47:13.577: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 19 10:47:13.598: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 19 10:47:13.662: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 19 10:47:13.662: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 19 10:47:13.662: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 19 10:47:13.673: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 19 10:47:13.673: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 19 10:47:13.673: INFO: e2e test version: v1.13.12
Dec 19 10:47:13.674: INFO: kube-apiserver version: v1.13.8
S
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 10:47:13.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Dec 19 10:47:14.030: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
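(Editor's note: the DefaultMode test being set up here creates a pod that mounts a downwardAPI volume with an explicit file mode. A minimal sketch of that kind of manifest, assuming illustrative names, image, and mode value — not the exact spec the framework generates:)

```yaml
# Illustrative pod: downwardAPI volume with a defaultMode; the test
# asserts files in the volume are created with this mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # mode under test; illustrative value
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```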
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 19 10:47:14.093: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-85nck" to be "success or failure" Dec 19 10:47:14.207: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 113.453976ms Dec 19 10:47:16.267: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173258111s Dec 19 10:47:18.291: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197790451s Dec 19 10:47:20.554: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460572791s Dec 19 10:47:22.585: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491304192s Dec 19 10:47:24.621: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.527223706s Dec 19 10:47:26.661: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.568053351s STEP: Saw pod success Dec 19 10:47:26.662: INFO: Pod "downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 10:47:26.682: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004 container client-container: STEP: delete the pod Dec 19 10:47:26.912: INFO: Waiting for pod downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004 to disappear Dec 19 10:47:26.925: INFO: Pod downwardapi-volume-eace029d-224c-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:47:26.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-85nck" for this suite. Dec 19 10:47:35.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 10:47:35.223: INFO: namespace: e2e-tests-downward-api-85nck, resource: bindings, ignored listing per whitelist Dec 19 10:47:35.350: INFO: namespace e2e-tests-downward-api-85nck deletion completed in 8.413524271s • [SLOW TEST:21.676 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Dec 19 10:47:35.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 19 10:47:35.585: INFO: Waiting up to 5m0s for pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-7rcxj" to be "success or failure" Dec 19 10:47:35.616: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.681275ms Dec 19 10:47:38.250: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664921219s Dec 19 10:47:40.270: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.684351625s Dec 19 10:47:42.546: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.960636364s Dec 19 10:47:44.589: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.003024284s Dec 19 10:47:46.941: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.355595156s Dec 19 10:47:49.053: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.467329865s STEP: Saw pod success Dec 19 10:47:49.053: INFO: Pod "downward-api-f7a33728-224c-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 10:47:49.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f7a33728-224c-11ea-a3c6-0242ac110004 container dapi-container: STEP: delete the pod Dec 19 10:47:49.213: INFO: Waiting for pod downward-api-f7a33728-224c-11ea-a3c6-0242ac110004 to disappear Dec 19 10:47:49.221: INFO: Pod downward-api-f7a33728-224c-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:47:49.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7rcxj" for this suite. Dec 19 10:47:55.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 10:47:55.421: INFO: namespace: e2e-tests-downward-api-7rcxj, resource: bindings, ignored listing per whitelist Dec 19 10:47:55.448: INFO: namespace e2e-tests-downward-api-7rcxj deletion completed in 6.22114668s • [SLOW TEST:20.097 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 10:47:55.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-039b2888-224d-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume secrets Dec 19 10:47:55.675: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-cqmgm" to be "success or failure" Dec 19 10:47:55.695: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.447088ms Dec 19 10:47:57.710: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03510071s Dec 19 10:47:59.732: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056702155s Dec 19 10:48:01.918: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243123755s Dec 19 10:48:03.930: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254467485s Dec 19 10:48:05.971: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.295827574s STEP: Saw pod success Dec 19 10:48:05.971: INFO: Pod "pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 10:48:05.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004 container projected-secret-volume-test: STEP: delete the pod Dec 19 10:48:06.150: INFO: Waiting for pod pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004 to disappear Dec 19 10:48:06.235: INFO: Pod pod-projected-secrets-039cb18f-224d-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:48:06.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cqmgm" for this suite. Dec 19 10:48:12.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 10:48:12.397: INFO: namespace: e2e-tests-projected-cqmgm, resource: bindings, ignored listing per whitelist Dec 19 10:48:12.456: INFO: namespace e2e-tests-projected-cqmgm deletion completed in 6.208970135s • [SLOW TEST:17.008 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
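(Editor's note: the projected-secret test that just passed mounts a Secret through a projected volume while running as non-root with defaultMode and fsGroup set. A hedged sketch of such a spec, with illustrative names and values:)

```yaml
# Illustrative pod: projected Secret volume, non-root user, fsGroup,
# and an explicit defaultMode on the projected files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                     # non-root; illustrative
    fsGroup: 1001                       # illustrative
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440                 # illustrative mode
      sources:
      - secret:
          name: projected-secret-test   # hypothetical Secret name
```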
STEP: Creating a kubernetes client Dec 19 10:48:12.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 19 10:48:12.715: INFO: Number of nodes with available pods: 0 Dec 19 10:48:12.715: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:14.006: INFO: Number of nodes with available pods: 0 Dec 19 10:48:14.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:14.773: INFO: Number of nodes with available pods: 0 Dec 19 10:48:14.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:15.743: INFO: Number of nodes with available pods: 0 Dec 19 10:48:15.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:16.734: INFO: Number of nodes with available pods: 0 Dec 19 10:48:16.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:18.710: INFO: Number of nodes with available pods: 0 Dec 19 10:48:18.710: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:18.915: INFO: Number of nodes with available pods: 0 Dec 19 10:48:18.915: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:19.979: INFO: Number of nodes with available pods: 0 Dec 19 10:48:19.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:20.804: INFO: Number of nodes with available pods: 0 Dec 
19 10:48:20.804: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:21.870: INFO: Number of nodes with available pods: 0 Dec 19 10:48:21.870: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 19 10:48:22.734: INFO: Number of nodes with available pods: 1 Dec 19 10:48:22.734: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Dec 19 10:48:22.927: INFO: Number of nodes with available pods: 1 Dec 19 10:48:22.927: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7wsb9, will wait for the garbage collector to delete the pods Dec 19 10:48:25.327: INFO: Deleting DaemonSet.extensions daemon-set took: 138.166467ms Dec 19 10:48:26.728: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400596886s Dec 19 10:48:32.055: INFO: Number of nodes with available pods: 0 Dec 19 10:48:32.055: INFO: Number of running nodes: 0, number of available pods: 0 Dec 19 10:48:32.099: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7wsb9/daemonsets","resourceVersion":"15332577"},"items":null} Dec 19 10:48:32.109: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7wsb9/pods","resourceVersion":"15332577"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:48:32.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-daemonsets-7wsb9" for this suite. Dec 19 10:48:38.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 10:48:38.349: INFO: namespace: e2e-tests-daemonsets-7wsb9, resource: bindings, ignored listing per whitelist Dec 19 10:48:38.354: INFO: namespace e2e-tests-daemonsets-7wsb9 deletion completed in 6.199399594s • [SLOW TEST:25.897 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 10:48:38.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 19 10:48:38.788: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332606,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 19 10:48:38.789: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332606,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 19 10:48:48.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332620,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Dec 19 10:48:48.814: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332620,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 19 10:48:58.843: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332632,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 19 10:48:58.843: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332632,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 19 10:49:08.874: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332645,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Dec 19 10:49:08.874: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-a,UID:1d4ed5a5-224d-11ea-a994-fa163e34d433,ResourceVersion:15332645,Generation:0,CreationTimestamp:2019-12-19 10:48:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 19 10:49:18.910: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-b,UID:353738be-224d-11ea-a994-fa163e34d433,ResourceVersion:15332658,Generation:0,CreationTimestamp:2019-12-19 10:49:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 19 10:49:18.910: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-b,UID:353738be-224d-11ea-a994-fa163e34d433,ResourceVersion:15332658,Generation:0,CreationTimestamp:2019-12-19 10:49:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 19 10:49:28.932: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-b,UID:353738be-224d-11ea-a994-fa163e34d433,ResourceVersion:15332671,Generation:0,CreationTimestamp:2019-12-19 10:49:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Dec 19 10:49:28.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-mqm79,SelfLink:/api/v1/namespaces/e2e-tests-watch-mqm79/configmaps/e2e-watch-test-configmap-b,UID:353738be-224d-11ea-a994-fa163e34d433,ResourceVersion:15332671,Generation:0,CreationTimestamp:2019-12-19 10:49:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:49:38.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-mqm79" for this suite. 
Dec 19 10:49:45.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 10:49:45.067: INFO: namespace: e2e-tests-watch-mqm79, resource: bindings, ignored listing per whitelist Dec 19 10:49:45.164: INFO: namespace e2e-tests-watch-mqm79 deletion completed in 6.208261651s • [SLOW TEST:66.810 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 10:49:45.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 19 10:49:45.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qv984' Dec 19 10:49:47.892: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 19 10:49:47.892: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Dec 19 10:49:48.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-qv984' Dec 19 10:49:48.352: INFO: stderr: "" Dec 19 10:49:48.352: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 10:49:48.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qv984" for this suite. 
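(Editor's note: the stderr captured above notes that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. A declarative equivalent of that command, as an illustrative Job manifest:)

```yaml
# Roughly equivalent to:
#   kubectl run e2e-test-nginx-job --restart=OnFailure \
#     --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure          # matches --restart=OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```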
Dec 19 10:50:12.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 10:50:12.778: INFO: namespace: e2e-tests-kubectl-qv984, resource: bindings, ignored listing per whitelist
Dec 19 10:50:12.807: INFO: namespace e2e-tests-kubectl-qv984 deletion completed in 24.433939268s
• [SLOW TEST:27.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 10:50:12.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 19 10:50:14.160: INFO: Pod name wrapped-volume-race-56204b35-224d-11ea-a3c6-0242ac110004: Found 0 pods out of 5
Dec 19 10:50:19.179: INFO: Pod name wrapped-volume-race-56204b35-224d-11ea-a3c6-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-56204b35-224d-11ea-a3c6-0242ac110004 in namespace e2e-tests-emptydir-wrapper-2bhrc, will wait for the garbage collector to delete the pods
Dec 19 10:52:25.438: INFO: Deleting ReplicationController wrapped-volume-race-56204b35-224d-11ea-a3c6-0242ac110004 took: 36.68554ms
Dec 19 10:52:25.739: INFO: Terminating ReplicationController wrapped-volume-race-56204b35-224d-11ea-a3c6-0242ac110004 pods took: 301.052222ms
STEP: Creating RC which spawns configmap-volume pods
Dec 19 10:53:08.976: INFO: Pod name wrapped-volume-race-be4768b5-224d-11ea-a3c6-0242ac110004: Found 0 pods out of 5
Dec 19 10:53:14.011: INFO: Pod name wrapped-volume-race-be4768b5-224d-11ea-a3c6-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-be4768b5-224d-11ea-a3c6-0242ac110004 in namespace e2e-tests-emptydir-wrapper-2bhrc, will wait for the garbage collector to delete the pods
Dec 19 10:55:30.321: INFO: Deleting ReplicationController wrapped-volume-race-be4768b5-224d-11ea-a3c6-0242ac110004 took: 46.987745ms
Dec 19 10:55:30.921: INFO: Terminating ReplicationController wrapped-volume-race-be4768b5-224d-11ea-a3c6-0242ac110004 pods took: 600.642211ms
STEP: Creating RC which spawns configmap-volume pods
Dec 19 10:56:23.806: INFO: Pod name wrapped-volume-race-3260e206-224e-11ea-a3c6-0242ac110004: Found 0 pods out of 5
Dec 19 10:56:28.831: INFO: Pod name wrapped-volume-race-3260e206-224e-11ea-a3c6-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3260e206-224e-11ea-a3c6-0242ac110004 in namespace e2e-tests-emptydir-wrapper-2bhrc, will wait for the garbage collector to delete the pods
Dec 19 10:58:13.011: INFO: Deleting ReplicationController wrapped-volume-race-3260e206-224e-11ea-a3c6-0242ac110004 took: 24.356712ms
Dec 19 10:58:13.511: INFO: Terminating ReplicationController wrapped-volume-race-3260e206-224e-11ea-a3c6-0242ac110004 pods took: 500.64456ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 10:59:05.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2bhrc" for this suite.
Dec 19 10:59:15.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 10:59:15.233: INFO: namespace: e2e-tests-emptydir-wrapper-2bhrc, resource: bindings, ignored listing per whitelist
Dec 19 10:59:15.324: INFO: namespace e2e-tests-emptydir-wrapper-2bhrc deletion completed in 10.266120286s
• [SLOW TEST:542.516 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 10:59:15.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 19 10:59:32.644: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 10:59:34.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-gj7nc" for this suite.
Dec 19 11:00:01.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:00:01.114: INFO: namespace: e2e-tests-replicaset-gj7nc, resource: bindings, ignored listing per whitelist
Dec 19 11:00:01.165: INFO: namespace e2e-tests-replicaset-gj7nc deletion completed in 26.612435636s
• [SLOW TEST:45.841 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:00:01.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-b429c246-224e-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 11:00:01.453: INFO: Waiting up to 5m0s for pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-bfb98" to be "success or failure"
Dec 19 11:00:01.465: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.418091ms
Dec 19 11:00:03.475: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022195128s
Dec 19 11:00:05.486: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033633737s
Dec 19 11:00:07.577: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123876308s
Dec 19 11:00:09.588: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135175075s
Dec 19 11:00:11.602: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149521739s
STEP: Saw pod success
Dec 19 11:00:11.602: INFO: Pod "pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:00:11.610: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 19 11:00:11.718: INFO: Waiting for pod pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:00:12.708: INFO: Pod pod-configmaps-b42d9f0a-224e-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:00:12.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bfb98" for this suite.
Dec 19 11:00:18.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:00:19.115: INFO: namespace: e2e-tests-configmap-bfb98, resource: bindings, ignored listing per whitelist
Dec 19 11:00:19.183: INFO: namespace e2e-tests-configmap-bfb98 deletion completed in 6.394935928s
• [SLOW TEST:18.018 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:00:19.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bee295d4-224e-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 11:00:19.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-zptxk" to be "success or failure"
Dec 19 11:00:19.387: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.057409ms
Dec 19 11:00:21.729: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35642155s
Dec 19 11:00:23.770: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397915415s
Dec 19 11:00:25.871: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498415353s
Dec 19 11:00:27.914: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541423364s
Dec 19 11:00:29.958: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.586011485s
STEP: Saw pod success
Dec 19 11:00:29.958: INFO: Pod "pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:00:29.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 19 11:00:30.179: INFO: Waiting for pod pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:00:30.230: INFO: Pod pod-projected-configmaps-bee3ebe3-224e-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:00:30.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zptxk" for this suite.
Dec 19 11:00:36.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:00:36.405: INFO: namespace: e2e-tests-projected-zptxk, resource: bindings, ignored listing per whitelist
Dec 19 11:00:36.584: INFO: namespace e2e-tests-projected-zptxk deletion completed in 6.333606704s
• [SLOW TEST:17.401 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:00:36.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:00:37.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:00:47.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ndqrk" for this suite.
Dec 19 11:01:35.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:01:35.315: INFO: namespace: e2e-tests-pods-ndqrk, resource: bindings, ignored listing per whitelist
Dec 19 11:01:35.371: INFO: namespace e2e-tests-pods-ndqrk deletion completed in 48.213933955s
• [SLOW TEST:58.787 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:01:35.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 11:01:35.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bc7cc'
Dec 19 11:01:37.729: INFO: stderr: ""
Dec 19 11:01:37.729: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 19 11:01:47.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bc7cc -o json'
Dec 19 11:01:48.006: INFO: stderr: ""
Dec 19 11:01:48.006: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-19T11:01:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-bc7cc\",\n \"resourceVersion\": \"15334114\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-bc7cc/pods/e2e-test-nginx-pod\",\n \"uid\": \"ed915ea0-224e-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-ftt5g\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-ftt5g\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-ftt5g\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-19T11:01:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-19T11:01:46Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-19T11:01:46Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-19T11:01:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://f684855844d4522afd6aa9cda97797aa519844e9eef9021c66344131dea051db\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-19T11:01:45Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-19T11:01:37Z\"\n }\n}\n"
STEP: replace the image in the pod
Dec 19 11:01:48.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-bc7cc'
Dec 19 11:01:48.679: INFO: stderr: ""
Dec 19 11:01:48.679: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 19 11:01:48.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bc7cc'
Dec 19 11:01:57.988: INFO: stderr: ""
Dec 19 11:01:57.989: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:01:57.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bc7cc" for this suite.
Dec 19 11:02:06.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:02:06.301: INFO: namespace: e2e-tests-kubectl-bc7cc, resource: bindings, ignored listing per whitelist
Dec 19 11:02:06.308: INFO: namespace e2e-tests-kubectl-bc7cc deletion completed in 8.305329021s
• [SLOW TEST:30.937 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:02:06.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:02:06.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lpdps" for this suite.
Dec 19 11:02:12.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:02:12.856: INFO: namespace: e2e-tests-services-lpdps, resource: bindings, ignored listing per whitelist
Dec 19 11:02:12.928: INFO: namespace e2e-tests-services-lpdps deletion completed in 6.293441468s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.619 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:02:12.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-02b9cb26-224f-11ea-a3c6-0242ac110004
STEP: Creating secret with name secret-projected-all-test-volume-02b9caf7-224f-11ea-a3c6-0242ac110004
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 19 11:02:13.300: INFO: Waiting up to 5m0s for pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-r524z" to be "success or failure"
Dec 19 11:02:13.319: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.909989ms
Dec 19 11:02:15.353: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053076761s
Dec 19 11:02:17.381: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080702261s
Dec 19 11:02:20.126: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826323555s
Dec 19 11:02:22.182: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.881749876s
Dec 19 11:02:24.220: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.920278854s
STEP: Saw pod success
Dec 19 11:02:24.220: INFO: Pod "projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:02:24.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004 container projected-all-volume-test:
STEP: delete the pod
Dec 19 11:02:24.658: INFO: Waiting for pod projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:02:24.669: INFO: Pod projected-volume-02b9ca4e-224f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:02:24.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r524z" for this suite.
Dec 19 11:02:30.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:02:30.768: INFO: namespace: e2e-tests-projected-r524z, resource: bindings, ignored listing per whitelist
Dec 19 11:02:30.904: INFO: namespace e2e-tests-projected-r524z deletion completed in 6.228452134s
• [SLOW TEST:17.976 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:02:30.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-2pqrt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2pqrt to expose endpoints map[]
Dec 19 11:02:31.284: INFO: Get endpoints failed (93.346159ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 19 11:02:32.305: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2pqrt exposes endpoints map[] (1.11486698s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2pqrt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2pqrt to expose endpoints map[pod1:[80]]
Dec 19 11:02:36.675: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.347996707s elapsed, will retry)
Dec 19 11:02:42.658: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2pqrt exposes endpoints map[pod1:[80]] (10.331023472s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2pqrt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2pqrt to expose endpoints map[pod1:[80] pod2:[80]]
Dec 19 11:02:47.144: INFO: Unexpected endpoints: found map[0e2358f4-224f-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.467153457s elapsed, will retry)
Dec 19 11:02:52.986: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2pqrt exposes endpoints map[pod1:[80] pod2:[80]] (10.309285564s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2pqrt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2pqrt to expose endpoints map[pod2:[80]]
Dec 19 11:02:53.255: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2pqrt exposes endpoints map[pod2:[80]] (159.499508ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2pqrt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-2pqrt to expose endpoints map[]
Dec 19 11:02:53.414: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-2pqrt exposes endpoints map[] (21.838518ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:02:53.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2pqrt" for this suite.
Dec 19 11:03:17.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:03:17.742: INFO: namespace: e2e-tests-services-2pqrt, resource: bindings, ignored listing per whitelist
Dec 19 11:03:17.757: INFO: namespace e2e-tests-services-2pqrt deletion completed in 24.195409387s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:46.853 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:03:17.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 11:03:17.967: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-cg9jl" to be "success or failure"
Dec 19 11:03:18.005: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.018317ms
Dec 19 11:03:20.146: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178039185s
Dec 19 11:03:22.318: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35050515s
Dec 19 11:03:24.337: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369470735s
Dec 19 11:03:26.470: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502750985s
Dec 19 11:03:28.489: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.521838956s
Dec 19 11:03:30.508: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.54010589s
STEP: Saw pod success
Dec 19 11:03:30.508: INFO: Pod "downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:03:30.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004 container client-container:
STEP: delete the pod
Dec 19 11:03:30.689: INFO: Waiting for pod downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:03:30.715: INFO: Pod downwardapi-volume-29503c0e-224f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:03:30.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cg9jl" for this suite.
Dec 19 11:03:36.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:03:37.073: INFO: namespace: e2e-tests-projected-cg9jl, resource: bindings, ignored listing per whitelist
Dec 19 11:03:37.196: INFO: namespace e2e-tests-projected-cg9jl deletion completed in 6.474546875s
• [SLOW TEST:19.438 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:03:37.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 19 11:03:37.520: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rb8gw,SelfLink:/api/v1/namespaces/e2e-tests-watch-rb8gw/configmaps/e2e-watch-test-resource-version,UID:34eb84c6-224f-11ea-a994-fa163e34d433,ResourceVersion:15334383,Generation:0,CreationTimestamp:2019-12-19 11:03:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 19 11:03:37.520: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rb8gw,SelfLink:/api/v1/namespaces/e2e-tests-watch-rb8gw/configmaps/e2e-watch-test-resource-version,UID:34eb84c6-224f-11ea-a994-fa163e34d433,ResourceVersion:15334384,Generation:0,CreationTimestamp:2019-12-19 11:03:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:03:37.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rb8gw" for this suite.
Dec 19 11:03:43.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:03:43.692: INFO: namespace: e2e-tests-watch-rb8gw, resource: bindings, ignored listing per whitelist
Dec 19 11:03:43.833: INFO: namespace e2e-tests-watch-rb8gw deletion completed in 6.307805757s
• [SLOW TEST:6.637 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:03:43.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 11:03:44.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-bqtr9" to be "success or failure"
Dec 19 11:03:44.185: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428417ms
Dec 19 11:03:46.358: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179585419s
Dec 19 11:03:48.381: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20209904s
Dec 19 11:03:50.737: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558481023s
Dec 19 11:03:52.748: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569414079s
Dec 19 11:03:54.794: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61573626s
Dec 19 11:03:56.814: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.635911958s
STEP: Saw pod success
Dec 19 11:03:56.815: INFO: Pod "downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:03:57.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004 container client-container:
STEP: delete the pod
Dec 19 11:03:57.594: INFO: Waiting for pod downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:03:57.602: INFO: Pod downwardapi-volume-38e48cad-224f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:03:57.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bqtr9" for this suite.
Dec 19 11:04:04.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:04:04.200: INFO: namespace: e2e-tests-projected-bqtr9, resource: bindings, ignored listing per whitelist
Dec 19 11:04:04.205: INFO: namespace e2e-tests-projected-bqtr9 deletion completed in 6.596781686s
• [SLOW TEST:20.372 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:04:04.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 19 11:04:04.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-mvrl9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 19 11:04:16.639: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 19 11:04:16.639: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:04:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mvrl9" for this suite.
Dec 19 11:04:24.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:04:24.982: INFO: namespace: e2e-tests-kubectl-mvrl9, resource: bindings, ignored listing per whitelist
Dec 19 11:04:25.045: INFO: namespace e2e-tests-kubectl-mvrl9 deletion completed in 6.163976702s
• [SLOW TEST:20.839 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:04:25.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-517160b3-224f-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 11:04:25.351: INFO: Waiting up to 5m0s for pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-x77x6" to be "success or failure"
Dec 19 11:04:25.356: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.070475ms
Dec 19 11:04:28.244: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892804945s
Dec 19 11:04:30.289: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.937996447s
Dec 19 11:04:32.871: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.520371651s
Dec 19 11:04:35.086: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 9.735177041s
Dec 19 11:04:37.108: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.756976612s
STEP: Saw pod success
Dec 19 11:04:37.108: INFO: Pod "pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:04:37.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 19 11:04:37.285: INFO: Waiting for pod pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:04:37.307: INFO: Pod pod-secrets-5174b6d5-224f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:04:37.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x77x6" for this suite.
Dec 19 11:04:43.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:04:43.399: INFO: namespace: e2e-tests-secrets-x77x6, resource: bindings, ignored listing per whitelist
Dec 19 11:04:43.567: INFO: namespace e2e-tests-secrets-x77x6 deletion completed in 6.244107254s
• [SLOW TEST:18.521 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:04:43.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qvjf6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 19 11:04:43.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 19 11:05:20.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-qvjf6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 11:05:20.212: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 11:05:20.899: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:05:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-qvjf6" for this suite.
Dec 19 11:05:48.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:05:49.090: INFO: namespace: e2e-tests-pod-network-test-qvjf6, resource: bindings, ignored listing per whitelist
Dec 19 11:05:49.105: INFO: namespace e2e-tests-pod-network-test-qvjf6 deletion completed in 28.184451923s
• [SLOW TEST:65.538 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:05:49.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-838e2087-224f-11ea-a3c6-0242ac110004
STEP: Creating secret with name s-test-opt-upd-838e20dd-224f-11ea-a3c6-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-838e2087-224f-11ea-a3c6-0242ac110004
STEP: Updating secret s-test-opt-upd-838e20dd-224f-11ea-a3c6-0242ac110004
STEP: Creating secret with name s-test-opt-create-838e20fb-224f-11ea-a3c6-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:06:07.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hvpld" for this suite.
Dec 19 11:06:31.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:06:32.145: INFO: namespace: e2e-tests-projected-hvpld, resource: bindings, ignored listing per whitelist
Dec 19 11:06:32.164: INFO: namespace e2e-tests-projected-hvpld deletion completed in 24.494056332s
• [SLOW TEST:43.058 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:06:32.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 19 11:06:32.426: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hg82g,SelfLink:/api/v1/namespaces/e2e-tests-watch-hg82g/configmaps/e2e-watch-test-watch-closed,UID:9d3beca8-224f-11ea-a994-fa163e34d433,ResourceVersion:15334770,Generation:0,CreationTimestamp:2019-12-19 11:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 19 11:06:32.427: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hg82g,SelfLink:/api/v1/namespaces/e2e-tests-watch-hg82g/configmaps/e2e-watch-test-watch-closed,UID:9d3beca8-224f-11ea-a994-fa163e34d433,ResourceVersion:15334771,Generation:0,CreationTimestamp:2019-12-19 11:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 19 11:06:32.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hg82g,SelfLink:/api/v1/namespaces/e2e-tests-watch-hg82g/configmaps/e2e-watch-test-watch-closed,UID:9d3beca8-224f-11ea-a994-fa163e34d433,ResourceVersion:15334772,Generation:0,CreationTimestamp:2019-12-19 11:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 19 11:06:32.504: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hg82g,SelfLink:/api/v1/namespaces/e2e-tests-watch-hg82g/configmaps/e2e-watch-test-watch-closed,UID:9d3beca8-224f-11ea-a994-fa163e34d433,ResourceVersion:15334773,Generation:0,CreationTimestamp:2019-12-19 11:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:06:32.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hg82g" for this suite.
Dec 19 11:06:38.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:06:38.920: INFO: namespace: e2e-tests-watch-hg82g, resource: bindings, ignored listing per whitelist
Dec 19 11:06:38.923: INFO: namespace e2e-tests-watch-hg82g deletion completed in 6.395918009s
• [SLOW TEST:6.759 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:06:38.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 19 11:06:39.032: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:06:56.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-rf2z8" for this suite.
Dec 19 11:07:04.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:07:04.523: INFO: namespace: e2e-tests-init-container-rf2z8, resource: bindings, ignored listing per whitelist
Dec 19 11:07:04.538: INFO: namespace e2e-tests-init-container-rf2z8 deletion completed in 8.278915223s
• [SLOW TEST:25.615 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:07:04.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:07:15.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-46k7s" for this suite.
Dec 19 11:07:21.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:07:21.366: INFO: namespace: e2e-tests-emptydir-wrapper-46k7s, resource: bindings, ignored listing per whitelist
Dec 19 11:07:21.380: INFO: namespace e2e-tests-emptydir-wrapper-46k7s deletion completed in 6.260446782s
• [SLOW TEST:16.842 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:07:21.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-rs29h
I1219 11:07:21.592935 9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-rs29h, replica count: 1
I1219 11:07:22.644187 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:23.645466 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:24.645964 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:25.646422 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:26.647649 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:27.648214 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:28.649246 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:29.649902 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:30.650677 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:31.651148 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:32.651961 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1219 11:07:33.652765 9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 19 11:07:33.879: INFO: Created: latency-svc-zxt5h
Dec 19 11:07:34.030: INFO: Got endpoints: latency-svc-zxt5h [277.376611ms]
Dec 19 11:07:34.121: INFO: Created: latency-svc-z9lzf
Dec 19 11:07:34.236: INFO: Created: latency-svc-txk5q
Dec 19 11:07:34.252: INFO: Got endpoints: latency-svc-txk5q [221.086395ms]
Dec 19 11:07:34.252: INFO: Got endpoints: latency-svc-z9lzf [220.172654ms]
Dec 19 11:07:34.515: INFO: Created: latency-svc-x4d2r
Dec 19 11:07:34.527: INFO: Got endpoints: latency-svc-x4d2r [495.242329ms]
Dec 19 11:07:34.692: INFO: Created: latency-svc-75d6b
Dec 19 11:07:34.702: INFO: Got endpoints: latency-svc-75d6b [669.508326ms]
Dec 19 11:07:34.772: INFO: Created: latency-svc-79j45
Dec 19 11:07:34.916: INFO: Got endpoints: latency-svc-79j45 [883.660519ms]
Dec 19 11:07:34.995: INFO: Created: latency-svc-vflnt
Dec 19 11:07:35.143: INFO: Got endpoints: latency-svc-vflnt [1.111896189s]
Dec 19 11:07:35.175: INFO: Created: latency-svc-x9ks2
Dec 19 11:07:35.215: INFO: Created: latency-svc-jsfnl
Dec 19 11:07:35.217: INFO: Got endpoints: latency-svc-x9ks2 [1.186311372s]
Dec 19 11:07:35.315: INFO: Got endpoints: latency-svc-jsfnl [1.282214745s]
Dec 19 11:07:35.384: INFO: Created: latency-svc-shkrp
Dec 19 11:07:35.389: INFO: Got endpoints: latency-svc-shkrp [1.357157759s]
Dec 19 11:07:35.507: INFO: Created: latency-svc-r2m7v
Dec 19 11:07:35.559: INFO: Got endpoints: latency-svc-r2m7v [1.526930036s]
Dec 19 11:07:35.718: INFO: Created: latency-svc-q25n9
Dec 19 11:07:35.726: INFO: Got endpoints: latency-svc-q25n9 [1.69331354s]
Dec 19 11:07:35.800: INFO: Created: latency-svc-8rxkw
Dec 19 11:07:35.979: INFO: Got endpoints: latency-svc-8rxkw [1.946674218s]
Dec 19 11:07:36.023: INFO: Created: latency-svc-hr5pw
Dec 19 11:07:36.053: INFO: Got endpoints: latency-svc-hr5pw [2.022511545s]
Dec 19 11:07:36.284: INFO: Created: latency-svc-829xv
Dec 19 11:07:36.325: INFO: Got endpoints: latency-svc-829xv [2.292851475s]
Dec 19 11:07:36.581: INFO: Created: latency-svc-rsz5m
Dec 19 11:07:36.593: INFO: Got endpoints: latency-svc-rsz5m [2.560306127s]
Dec 19 11:07:36.869: INFO: Created: latency-svc-hl8vk
Dec 19 11:07:37.007: INFO: Got endpoints: latency-svc-hl8vk [2.754866369s]
Dec 19 11:07:37.076: INFO: Created: latency-svc-frlpk
Dec 19 11:07:37.085: INFO: Got endpoints: latency-svc-frlpk [2.832876494s]
Dec 19 11:07:37.222: INFO: Created: latency-svc-6cwqh
Dec 19 11:07:37.242: INFO: Got endpoints: latency-svc-6cwqh [2.715246447s]
Dec 19 11:07:37.385: INFO: Created: latency-svc-xh4dj
Dec 19 11:07:37.394: INFO: Got endpoints: latency-svc-xh4dj [2.692091604s]
Dec 19 11:07:37.461: INFO: Created: latency-svc-wvhbb
Dec 19 11:07:37.461: INFO: Got endpoints: latency-svc-wvhbb [2.545027712s]
Dec 19 11:07:37.567: INFO: Created: latency-svc-xll8n
Dec 19 11:07:37.589: INFO: Got endpoints: latency-svc-xll8n [2.445319996s]
Dec 19 11:07:37.653: INFO: Created: latency-svc-ntrr4
Dec 19 11:07:37.655: INFO: Got endpoints: latency-svc-ntrr4 [2.43799413s]
Dec 19 11:07:37.796: INFO: Created: latency-svc-txfsx
Dec 19 11:07:37.815: INFO: Got endpoints: latency-svc-txfsx [2.500238346s]
Dec 19 11:07:37.834: INFO: Created: latency-svc-52mql
Dec 19 11:07:37.851: INFO: Got endpoints: latency-svc-52mql [2.461942966s]
Dec 19 11:07:38.037: INFO: Created: latency-svc-glpp7
Dec 19 11:07:38.062: INFO: Got endpoints: latency-svc-glpp7 [2.502416611s]
Dec 19 11:07:38.228: INFO: Created: latency-svc-b79xq
Dec 19 11:07:38.263: INFO: Got endpoints: latency-svc-b79xq [2.537742337s]
Dec 19 11:07:38.328: INFO: Created: latency-svc-q6x2r
Dec 19 11:07:38.433: INFO: Got endpoints: latency-svc-q6x2r [2.453941294s]
Dec 19 11:07:38.481: INFO: Created: latency-svc-hd9kl
Dec 19 11:07:38.497: INFO: Got endpoints: latency-svc-hd9kl [2.44340791s]
Dec 19 11:07:38.652: INFO: Created: latency-svc-pclfc
Dec 19 11:07:38.717: INFO: Got endpoints: latency-svc-pclfc [2.391738701s]
Dec 19 11:07:38.911: INFO: Created: latency-svc-gblmj
Dec 19 11:07:38.946: INFO: Got endpoints: latency-svc-gblmj [2.353014329s]
Dec 19 11:07:39.119: INFO: Created: latency-svc-2gm4m
Dec 19 11:07:39.148: INFO: Got endpoints: latency-svc-2gm4m [2.140725244s]
Dec 19 11:07:39.349: INFO: Created: latency-svc-s9q27
Dec 19 11:07:39.349: INFO: Got endpoints:
latency-svc-s9q27 [2.264131867s] Dec 19 11:07:39.389: INFO: Created: latency-svc-dq7qm Dec 19 11:07:39.541: INFO: Got endpoints: latency-svc-dq7qm [2.298130848s] Dec 19 11:07:39.562: INFO: Created: latency-svc-mfbxh Dec 19 11:07:39.580: INFO: Got endpoints: latency-svc-mfbxh [2.186536446s] Dec 19 11:07:39.758: INFO: Created: latency-svc-6t7mm Dec 19 11:07:39.783: INFO: Got endpoints: latency-svc-6t7mm [242.297508ms] Dec 19 11:07:39.960: INFO: Created: latency-svc-v2phz Dec 19 11:07:40.029: INFO: Got endpoints: latency-svc-v2phz [2.568176207s] Dec 19 11:07:40.053: INFO: Created: latency-svc-dbbkf Dec 19 11:07:40.314: INFO: Got endpoints: latency-svc-dbbkf [2.724611407s] Dec 19 11:07:40.347: INFO: Created: latency-svc-8lspt Dec 19 11:07:40.373: INFO: Got endpoints: latency-svc-8lspt [2.717454885s] Dec 19 11:07:40.641: INFO: Created: latency-svc-96sng Dec 19 11:07:40.778: INFO: Got endpoints: latency-svc-96sng [2.963011093s] Dec 19 11:07:40.795: INFO: Created: latency-svc-rssfb Dec 19 11:07:40.809: INFO: Got endpoints: latency-svc-rssfb [2.957607457s] Dec 19 11:07:41.007: INFO: Created: latency-svc-xwlxn Dec 19 11:07:41.028: INFO: Got endpoints: latency-svc-xwlxn [2.96585403s] Dec 19 11:07:41.096: INFO: Created: latency-svc-blbfm Dec 19 11:07:41.207: INFO: Got endpoints: latency-svc-blbfm [2.943469338s] Dec 19 11:07:41.238: INFO: Created: latency-svc-f79n7 Dec 19 11:07:41.262: INFO: Got endpoints: latency-svc-f79n7 [2.829391076s] Dec 19 11:07:41.417: INFO: Created: latency-svc-zqzcx Dec 19 11:07:41.451: INFO: Got endpoints: latency-svc-zqzcx [2.953835578s] Dec 19 11:07:41.601: INFO: Created: latency-svc-85vdk Dec 19 11:07:41.653: INFO: Got endpoints: latency-svc-85vdk [2.935700217s] Dec 19 11:07:41.684: INFO: Created: latency-svc-t6f6g Dec 19 11:07:41.837: INFO: Got endpoints: latency-svc-t6f6g [2.890506771s] Dec 19 11:07:41.920: INFO: Created: latency-svc-9hn26 Dec 19 11:07:41.920: INFO: Got endpoints: latency-svc-9hn26 [2.771457449s] Dec 19 11:07:42.166: INFO: 
Created: latency-svc-m456q Dec 19 11:07:42.187: INFO: Got endpoints: latency-svc-m456q [2.838050401s] Dec 19 11:07:42.419: INFO: Created: latency-svc-97zbm Dec 19 11:07:42.435: INFO: Got endpoints: latency-svc-97zbm [2.854400678s] Dec 19 11:07:42.660: INFO: Created: latency-svc-gsnfd Dec 19 11:07:42.686: INFO: Got endpoints: latency-svc-gsnfd [2.902376892s] Dec 19 11:07:42.722: INFO: Created: latency-svc-gmt5x Dec 19 11:07:42.882: INFO: Got endpoints: latency-svc-gmt5x [2.852324215s] Dec 19 11:07:42.941: INFO: Created: latency-svc-xmch8 Dec 19 11:07:42.944: INFO: Got endpoints: latency-svc-xmch8 [2.630092294s] Dec 19 11:07:43.158: INFO: Created: latency-svc-dnsvr Dec 19 11:07:43.202: INFO: Got endpoints: latency-svc-dnsvr [2.828817903s] Dec 19 11:07:43.255: INFO: Created: latency-svc-kxwnz Dec 19 11:07:43.351: INFO: Got endpoints: latency-svc-kxwnz [2.571829584s] Dec 19 11:07:43.387: INFO: Created: latency-svc-bczvs Dec 19 11:07:43.409: INFO: Got endpoints: latency-svc-bczvs [2.600567093s] Dec 19 11:07:43.541: INFO: Created: latency-svc-4rq8q Dec 19 11:07:43.586: INFO: Got endpoints: latency-svc-4rq8q [2.557842943s] Dec 19 11:07:43.765: INFO: Created: latency-svc-scnh2 Dec 19 11:07:43.789: INFO: Got endpoints: latency-svc-scnh2 [2.581061774s] Dec 19 11:07:44.021: INFO: Created: latency-svc-fvv87 Dec 19 11:07:44.030: INFO: Got endpoints: latency-svc-fvv87 [2.767131847s] Dec 19 11:07:44.147: INFO: Created: latency-svc-wtxbf Dec 19 11:07:44.377: INFO: Created: latency-svc-vfc26 Dec 19 11:07:44.604: INFO: Got endpoints: latency-svc-vfc26 [2.950473364s] Dec 19 11:07:44.613: INFO: Got endpoints: latency-svc-wtxbf [3.161821299s] Dec 19 11:07:44.724: INFO: Created: latency-svc-mhfc2 Dec 19 11:07:44.766: INFO: Got endpoints: latency-svc-mhfc2 [2.929248163s] Dec 19 11:07:44.797: INFO: Created: latency-svc-84lrd Dec 19 11:07:44.922: INFO: Got endpoints: latency-svc-84lrd [3.002255295s] Dec 19 11:07:45.245: INFO: Created: latency-svc-w74dx Dec 19 11:07:45.452: INFO: Got 
endpoints: latency-svc-w74dx [3.264176792s] Dec 19 11:07:45.673: INFO: Created: latency-svc-k74lf Dec 19 11:07:45.699: INFO: Got endpoints: latency-svc-k74lf [3.263268827s] Dec 19 11:07:45.712: INFO: Created: latency-svc-hpst4 Dec 19 11:07:45.759: INFO: Got endpoints: latency-svc-hpst4 [3.073222015s] Dec 19 11:07:45.921: INFO: Created: latency-svc-dlwv4 Dec 19 11:07:45.938: INFO: Got endpoints: latency-svc-dlwv4 [3.055850887s] Dec 19 11:07:46.197: INFO: Created: latency-svc-jzg5f Dec 19 11:07:46.212: INFO: Got endpoints: latency-svc-jzg5f [3.268178071s] Dec 19 11:07:46.264: INFO: Created: latency-svc-ndxlv Dec 19 11:07:46.417: INFO: Got endpoints: latency-svc-ndxlv [3.215263749s] Dec 19 11:07:46.435: INFO: Created: latency-svc-k4fll Dec 19 11:07:46.479: INFO: Got endpoints: latency-svc-k4fll [3.128532458s] Dec 19 11:07:46.693: INFO: Created: latency-svc-2hw5d Dec 19 11:07:46.693: INFO: Got endpoints: latency-svc-2hw5d [3.283809986s] Dec 19 11:07:46.809: INFO: Created: latency-svc-jn488 Dec 19 11:07:46.834: INFO: Got endpoints: latency-svc-jn488 [3.247988472s] Dec 19 11:07:47.054: INFO: Created: latency-svc-5964c Dec 19 11:07:47.054: INFO: Got endpoints: latency-svc-5964c [3.265214059s] Dec 19 11:07:47.236: INFO: Created: latency-svc-5tzz9 Dec 19 11:07:47.242: INFO: Got endpoints: latency-svc-5tzz9 [3.211897649s] Dec 19 11:07:47.312: INFO: Created: latency-svc-wxjc6 Dec 19 11:07:47.398: INFO: Got endpoints: latency-svc-wxjc6 [2.794036173s] Dec 19 11:07:47.454: INFO: Created: latency-svc-jxrz8 Dec 19 11:07:47.471: INFO: Got endpoints: latency-svc-jxrz8 [2.857321619s] Dec 19 11:07:47.618: INFO: Created: latency-svc-xhfvr Dec 19 11:07:47.677: INFO: Got endpoints: latency-svc-xhfvr [2.909769031s] Dec 19 11:07:47.890: INFO: Created: latency-svc-9w9rt Dec 19 11:07:47.909: INFO: Got endpoints: latency-svc-9w9rt [2.98646719s] Dec 19 11:07:48.121: INFO: Created: latency-svc-zlkwq Dec 19 11:07:48.146: INFO: Got endpoints: latency-svc-zlkwq [2.693806152s] Dec 19 11:07:48.190: 
INFO: Created: latency-svc-s8wnz Dec 19 11:07:48.335: INFO: Got endpoints: latency-svc-s8wnz [2.635923515s] Dec 19 11:07:48.355: INFO: Created: latency-svc-qvhw2 Dec 19 11:07:48.401: INFO: Got endpoints: latency-svc-qvhw2 [2.64181306s] Dec 19 11:07:48.591: INFO: Created: latency-svc-9ztd4 Dec 19 11:07:48.594: INFO: Got endpoints: latency-svc-9ztd4 [2.655955902s] Dec 19 11:07:48.767: INFO: Created: latency-svc-r7jlp Dec 19 11:07:48.794: INFO: Got endpoints: latency-svc-r7jlp [2.581116293s] Dec 19 11:07:48.850: INFO: Created: latency-svc-x272g Dec 19 11:07:49.072: INFO: Got endpoints: latency-svc-x272g [2.654413184s] Dec 19 11:07:49.104: INFO: Created: latency-svc-gmr84 Dec 19 11:07:49.132: INFO: Got endpoints: latency-svc-gmr84 [2.652112367s] Dec 19 11:07:49.342: INFO: Created: latency-svc-qmhft Dec 19 11:07:49.350: INFO: Got endpoints: latency-svc-qmhft [2.656633044s] Dec 19 11:07:49.411: INFO: Created: latency-svc-mh7bd Dec 19 11:07:49.516: INFO: Got endpoints: latency-svc-mh7bd [2.681893917s] Dec 19 11:07:49.546: INFO: Created: latency-svc-zr448 Dec 19 11:07:49.564: INFO: Got endpoints: latency-svc-zr448 [2.509695499s] Dec 19 11:07:49.729: INFO: Created: latency-svc-rlrdd Dec 19 11:07:49.754: INFO: Got endpoints: latency-svc-rlrdd [2.511926784s] Dec 19 11:07:50.010: INFO: Created: latency-svc-84k2l Dec 19 11:07:50.024: INFO: Got endpoints: latency-svc-84k2l [2.625559041s] Dec 19 11:07:50.058: INFO: Created: latency-svc-wmbkj Dec 19 11:07:50.169: INFO: Got endpoints: latency-svc-wmbkj [2.698302825s] Dec 19 11:07:50.201: INFO: Created: latency-svc-sxbpn Dec 19 11:07:50.208: INFO: Got endpoints: latency-svc-sxbpn [2.530734974s] Dec 19 11:07:50.421: INFO: Created: latency-svc-5d5p9 Dec 19 11:07:50.469: INFO: Got endpoints: latency-svc-5d5p9 [2.560032838s] Dec 19 11:07:50.632: INFO: Created: latency-svc-mk9lk Dec 19 11:07:50.662: INFO: Got endpoints: latency-svc-mk9lk [2.516207774s] Dec 19 11:07:50.714: INFO: Created: latency-svc-8l8jf Dec 19 11:07:50.837: INFO: Got 
endpoints: latency-svc-8l8jf [2.502469821s] Dec 19 11:07:50.941: INFO: Created: latency-svc-9cd5s Dec 19 11:07:51.091: INFO: Got endpoints: latency-svc-9cd5s [2.690137855s] Dec 19 11:07:51.185: INFO: Created: latency-svc-xl7g6 Dec 19 11:07:51.353: INFO: Got endpoints: latency-svc-xl7g6 [2.759190709s] Dec 19 11:07:51.402: INFO: Created: latency-svc-tpvl2 Dec 19 11:07:51.421: INFO: Got endpoints: latency-svc-tpvl2 [2.627251452s] Dec 19 11:07:51.612: INFO: Created: latency-svc-9kdrc Dec 19 11:07:51.786: INFO: Got endpoints: latency-svc-9kdrc [2.713731521s] Dec 19 11:07:51.799: INFO: Created: latency-svc-pd25h Dec 19 11:07:51.822: INFO: Got endpoints: latency-svc-pd25h [2.69012362s] Dec 19 11:07:52.086: INFO: Created: latency-svc-s4dqm Dec 19 11:07:52.094: INFO: Got endpoints: latency-svc-s4dqm [2.744223209s] Dec 19 11:07:52.283: INFO: Created: latency-svc-nwhxm Dec 19 11:07:52.319: INFO: Got endpoints: latency-svc-nwhxm [2.803110105s] Dec 19 11:07:52.573: INFO: Created: latency-svc-wb4n2 Dec 19 11:07:52.573: INFO: Got endpoints: latency-svc-wb4n2 [3.008860734s] Dec 19 11:07:52.795: INFO: Created: latency-svc-q9p82 Dec 19 11:07:52.818: INFO: Got endpoints: latency-svc-q9p82 [3.064221486s] Dec 19 11:07:53.034: INFO: Created: latency-svc-9g2lb Dec 19 11:07:53.056: INFO: Got endpoints: latency-svc-9g2lb [3.032278453s] Dec 19 11:07:53.233: INFO: Created: latency-svc-fgjvb Dec 19 11:07:53.244: INFO: Got endpoints: latency-svc-fgjvb [3.074511758s] Dec 19 11:07:53.285: INFO: Created: latency-svc-nvs4q Dec 19 11:07:53.424: INFO: Got endpoints: latency-svc-nvs4q [3.216708299s] Dec 19 11:07:53.441: INFO: Created: latency-svc-x9ndz Dec 19 11:07:53.466: INFO: Got endpoints: latency-svc-x9ndz [2.996788391s] Dec 19 11:07:53.506: INFO: Created: latency-svc-ggqn2 Dec 19 11:07:53.638: INFO: Got endpoints: latency-svc-ggqn2 [2.975753041s] Dec 19 11:07:53.669: INFO: Created: latency-svc-j494k Dec 19 11:07:53.710: INFO: Got endpoints: latency-svc-j494k [2.872809289s] Dec 19 11:07:54.041: 
INFO: Created: latency-svc-jh6vj Dec 19 11:07:54.078: INFO: Got endpoints: latency-svc-jh6vj [2.986379338s] Dec 19 11:07:54.319: INFO: Created: latency-svc-lr5qr Dec 19 11:07:54.351: INFO: Got endpoints: latency-svc-lr5qr [2.99764305s] Dec 19 11:07:54.565: INFO: Created: latency-svc-5fvdv Dec 19 11:07:54.632: INFO: Got endpoints: latency-svc-5fvdv [3.21035663s] Dec 19 11:07:54.672: INFO: Created: latency-svc-6tpp6 Dec 19 11:07:54.928: INFO: Created: latency-svc-b6whf Dec 19 11:07:54.929: INFO: Got endpoints: latency-svc-6tpp6 [3.142798211s] Dec 19 11:07:55.169: INFO: Created: latency-svc-wcjnh Dec 19 11:07:55.187: INFO: Got endpoints: latency-svc-b6whf [3.364310261s] Dec 19 11:07:55.187: INFO: Got endpoints: latency-svc-wcjnh [3.092835369s] Dec 19 11:07:55.386: INFO: Created: latency-svc-629fb Dec 19 11:07:55.399: INFO: Got endpoints: latency-svc-629fb [3.078832183s] Dec 19 11:07:55.592: INFO: Created: latency-svc-kvw75 Dec 19 11:07:55.622: INFO: Got endpoints: latency-svc-kvw75 [3.048540837s] Dec 19 11:07:55.886: INFO: Created: latency-svc-jjbs5 Dec 19 11:07:55.909: INFO: Got endpoints: latency-svc-jjbs5 [3.090155633s] Dec 19 11:07:56.112: INFO: Created: latency-svc-8khn8 Dec 19 11:07:56.162: INFO: Got endpoints: latency-svc-8khn8 [3.105148329s] Dec 19 11:07:56.323: INFO: Created: latency-svc-svgs9 Dec 19 11:07:56.423: INFO: Created: latency-svc-htfw4 Dec 19 11:07:56.535: INFO: Got endpoints: latency-svc-htfw4 [3.110201988s] Dec 19 11:07:56.536: INFO: Got endpoints: latency-svc-svgs9 [3.292088419s] Dec 19 11:07:56.571: INFO: Created: latency-svc-7fzpn Dec 19 11:07:56.657: INFO: Got endpoints: latency-svc-7fzpn [3.19092976s] Dec 19 11:07:56.810: INFO: Created: latency-svc-47mmq Dec 19 11:07:56.835: INFO: Got endpoints: latency-svc-47mmq [3.196748628s] Dec 19 11:07:56.918: INFO: Created: latency-svc-9gjwx Dec 19 11:07:57.036: INFO: Got endpoints: latency-svc-9gjwx [3.325302655s] Dec 19 11:07:57.059: INFO: Created: latency-svc-v6zgp Dec 19 11:07:57.100: INFO: Got 
endpoints: latency-svc-v6zgp [3.02144548s] Dec 19 11:07:57.123: INFO: Created: latency-svc-pk2m4 Dec 19 11:07:57.259: INFO: Got endpoints: latency-svc-pk2m4 [2.90798734s] Dec 19 11:07:57.279: INFO: Created: latency-svc-jp7b7 Dec 19 11:07:57.300: INFO: Got endpoints: latency-svc-jp7b7 [2.668643535s] Dec 19 11:07:57.378: INFO: Created: latency-svc-mv9mb Dec 19 11:07:57.469: INFO: Got endpoints: latency-svc-mv9mb [2.540044722s] Dec 19 11:07:57.502: INFO: Created: latency-svc-vbkwd Dec 19 11:07:57.513: INFO: Got endpoints: latency-svc-vbkwd [2.32627947s] Dec 19 11:07:57.679: INFO: Created: latency-svc-rthfh Dec 19 11:07:57.712: INFO: Got endpoints: latency-svc-rthfh [2.524746705s] Dec 19 11:07:57.862: INFO: Created: latency-svc-qpxwr Dec 19 11:07:57.902: INFO: Got endpoints: latency-svc-qpxwr [2.503479845s] Dec 19 11:07:58.258: INFO: Created: latency-svc-dgjct Dec 19 11:07:58.304: INFO: Got endpoints: latency-svc-dgjct [2.682078252s] Dec 19 11:07:59.093: INFO: Created: latency-svc-hnql7 Dec 19 11:07:59.109: INFO: Got endpoints: latency-svc-hnql7 [3.200290504s] Dec 19 11:07:59.385: INFO: Created: latency-svc-h28tf Dec 19 11:07:59.416: INFO: Got endpoints: latency-svc-h28tf [3.254344418s] Dec 19 11:07:59.552: INFO: Created: latency-svc-qtg78 Dec 19 11:07:59.575: INFO: Got endpoints: latency-svc-qtg78 [3.039925341s] Dec 19 11:07:59.673: INFO: Created: latency-svc-vv7cf Dec 19 11:07:59.680: INFO: Got endpoints: latency-svc-vv7cf [3.143994538s] Dec 19 11:08:00.426: INFO: Created: latency-svc-2s95p Dec 19 11:08:00.576: INFO: Got endpoints: latency-svc-2s95p [3.918305812s] Dec 19 11:08:02.215: INFO: Created: latency-svc-rbc67 Dec 19 11:08:02.280: INFO: Got endpoints: latency-svc-rbc67 [5.444409672s] Dec 19 11:08:02.750: INFO: Created: latency-svc-7mlhf Dec 19 11:08:02.773: INFO: Got endpoints: latency-svc-7mlhf [5.73726797s] Dec 19 11:08:03.134: INFO: Created: latency-svc-lsmfv Dec 19 11:08:03.186: INFO: Got endpoints: latency-svc-lsmfv [6.086036529s] Dec 19 11:08:03.305: 
INFO: Created: latency-svc-lh59m Dec 19 11:08:03.308: INFO: Got endpoints: latency-svc-lh59m [6.048029113s] Dec 19 11:08:03.438: INFO: Created: latency-svc-hcxrt Dec 19 11:08:03.449: INFO: Got endpoints: latency-svc-hcxrt [6.148729333s] Dec 19 11:08:03.588: INFO: Created: latency-svc-7sbd4 Dec 19 11:08:03.612: INFO: Got endpoints: latency-svc-7sbd4 [6.142746383s] Dec 19 11:08:03.805: INFO: Created: latency-svc-6745t Dec 19 11:08:03.827: INFO: Got endpoints: latency-svc-6745t [6.313421059s] Dec 19 11:08:04.086: INFO: Created: latency-svc-ztnrd Dec 19 11:08:04.115: INFO: Got endpoints: latency-svc-ztnrd [6.402211811s] Dec 19 11:08:04.169: INFO: Created: latency-svc-8wzcc Dec 19 11:08:04.297: INFO: Got endpoints: latency-svc-8wzcc [6.394570636s] Dec 19 11:08:04.319: INFO: Created: latency-svc-f5x65 Dec 19 11:08:04.359: INFO: Got endpoints: latency-svc-f5x65 [6.055121829s] Dec 19 11:08:04.514: INFO: Created: latency-svc-skplg Dec 19 11:08:04.524: INFO: Got endpoints: latency-svc-skplg [5.415070494s] Dec 19 11:08:04.614: INFO: Created: latency-svc-vk9v9 Dec 19 11:08:04.689: INFO: Got endpoints: latency-svc-vk9v9 [5.272997098s] Dec 19 11:08:04.737: INFO: Created: latency-svc-blhks Dec 19 11:08:04.749: INFO: Got endpoints: latency-svc-blhks [5.173720557s] Dec 19 11:08:04.899: INFO: Created: latency-svc-xtb5b Dec 19 11:08:04.906: INFO: Got endpoints: latency-svc-xtb5b [5.226167986s] Dec 19 11:08:04.957: INFO: Created: latency-svc-l82tb Dec 19 11:08:04.979: INFO: Got endpoints: latency-svc-l82tb [4.403262722s] Dec 19 11:08:05.223: INFO: Created: latency-svc-2vb4w Dec 19 11:08:05.223: INFO: Got endpoints: latency-svc-2vb4w [2.94287019s] Dec 19 11:08:05.340: INFO: Created: latency-svc-kjg59 Dec 19 11:08:05.365: INFO: Got endpoints: latency-svc-kjg59 [2.591102328s] Dec 19 11:08:05.408: INFO: Created: latency-svc-jz5lz Dec 19 11:08:05.527: INFO: Got endpoints: latency-svc-jz5lz [2.340772367s] Dec 19 11:08:05.549: INFO: Created: latency-svc-mbvn6 Dec 19 11:08:05.581: INFO: Got 
endpoints: latency-svc-mbvn6 [2.273032498s] Dec 19 11:08:05.698: INFO: Created: latency-svc-ns665 Dec 19 11:08:05.716: INFO: Got endpoints: latency-svc-ns665 [2.266481063s] Dec 19 11:08:05.786: INFO: Created: latency-svc-f2bnb Dec 19 11:08:05.786: INFO: Got endpoints: latency-svc-f2bnb [2.174170353s] Dec 19 11:08:05.977: INFO: Created: latency-svc-lnw97 Dec 19 11:08:05.987: INFO: Got endpoints: latency-svc-lnw97 [2.159774062s] Dec 19 11:08:06.286: INFO: Created: latency-svc-t5n6j Dec 19 11:08:06.305: INFO: Got endpoints: latency-svc-t5n6j [2.190728845s] Dec 19 11:08:06.566: INFO: Created: latency-svc-rd6rw Dec 19 11:08:06.584: INFO: Got endpoints: latency-svc-rd6rw [2.28715119s] Dec 19 11:08:06.780: INFO: Created: latency-svc-4sj94 Dec 19 11:08:06.793: INFO: Got endpoints: latency-svc-4sj94 [2.433835401s] Dec 19 11:08:06.867: INFO: Created: latency-svc-lw6m7 Dec 19 11:08:06.922: INFO: Got endpoints: latency-svc-lw6m7 [2.397833064s] Dec 19 11:08:06.952: INFO: Created: latency-svc-htrx7 Dec 19 11:08:06.974: INFO: Got endpoints: latency-svc-htrx7 [2.284739493s] Dec 19 11:08:07.128: INFO: Created: latency-svc-wk95b Dec 19 11:08:07.148: INFO: Got endpoints: latency-svc-wk95b [2.398679337s] Dec 19 11:08:07.326: INFO: Created: latency-svc-cjcdm Dec 19 11:08:07.345: INFO: Got endpoints: latency-svc-cjcdm [2.439006728s] Dec 19 11:08:07.952: INFO: Created: latency-svc-2q8n9 Dec 19 11:08:08.319: INFO: Created: latency-svc-7qxwt Dec 19 11:08:08.338: INFO: Got endpoints: latency-svc-2q8n9 [3.358841365s] Dec 19 11:08:08.340: INFO: Got endpoints: latency-svc-7qxwt [3.117064354s] Dec 19 11:08:08.766: INFO: Created: latency-svc-8dq24 Dec 19 11:08:08.853: INFO: Got endpoints: latency-svc-8dq24 [3.488542427s] Dec 19 11:08:08.892: INFO: Created: latency-svc-z6lb6 Dec 19 11:08:08.897: INFO: Got endpoints: latency-svc-z6lb6 [3.37007577s] Dec 19 11:08:08.935: INFO: Created: latency-svc-fc4n8 Dec 19 11:08:09.041: INFO: Got endpoints: latency-svc-fc4n8 [3.459606926s] Dec 19 11:08:09.092: 
INFO: Created: latency-svc-hbfkb Dec 19 11:08:09.107: INFO: Got endpoints: latency-svc-hbfkb [3.391388563s] Dec 19 11:08:09.298: INFO: Created: latency-svc-dp47g Dec 19 11:08:09.345: INFO: Got endpoints: latency-svc-dp47g [3.559113046s] Dec 19 11:08:09.374: INFO: Created: latency-svc-btxkp Dec 19 11:08:09.469: INFO: Got endpoints: latency-svc-btxkp [3.481835874s] Dec 19 11:08:09.490: INFO: Created: latency-svc-lvtvl Dec 19 11:08:09.538: INFO: Got endpoints: latency-svc-lvtvl [3.232369733s] Dec 19 11:08:09.651: INFO: Created: latency-svc-pjxz9 Dec 19 11:08:09.667: INFO: Got endpoints: latency-svc-pjxz9 [3.082326714s] Dec 19 11:08:09.711: INFO: Created: latency-svc-26k49 Dec 19 11:08:09.723: INFO: Got endpoints: latency-svc-26k49 [2.92978073s] Dec 19 11:08:09.836: INFO: Created: latency-svc-zbqnw Dec 19 11:08:09.876: INFO: Got endpoints: latency-svc-zbqnw [2.953679274s] Dec 19 11:08:09.989: INFO: Created: latency-svc-mhj59 Dec 19 11:08:09.996: INFO: Got endpoints: latency-svc-mhj59 [3.02185129s] Dec 19 11:08:10.172: INFO: Created: latency-svc-mgwnd Dec 19 11:08:10.181: INFO: Got endpoints: latency-svc-mgwnd [3.033306695s] Dec 19 11:08:10.242: INFO: Created: latency-svc-ls6f6 Dec 19 11:08:10.251: INFO: Got endpoints: latency-svc-ls6f6 [2.905061641s] Dec 19 11:08:10.459: INFO: Created: latency-svc-nc9tt Dec 19 11:08:10.622: INFO: Got endpoints: latency-svc-nc9tt [2.283804492s] Dec 19 11:08:10.662: INFO: Created: latency-svc-vskmj Dec 19 11:08:10.802: INFO: Got endpoints: latency-svc-vskmj [2.462337913s] Dec 19 11:08:10.830: INFO: Created: latency-svc-6fmm8 Dec 19 11:08:10.871: INFO: Got endpoints: latency-svc-6fmm8 [2.017394976s] Dec 19 11:08:11.012: INFO: Created: latency-svc-hplp8 Dec 19 11:08:11.030: INFO: Got endpoints: latency-svc-hplp8 [2.132908709s] Dec 19 11:08:11.066: INFO: Created: latency-svc-p8ps8 Dec 19 11:08:11.158: INFO: Got endpoints: latency-svc-p8ps8 [2.117382473s] Dec 19 11:08:11.205: INFO: Created: latency-svc-gqtp7 Dec 19 11:08:11.241: INFO: Got 
endpoints: latency-svc-gqtp7 [2.132959242s] Dec 19 11:08:11.416: INFO: Created: latency-svc-94bqk Dec 19 11:08:11.456: INFO: Got endpoints: latency-svc-94bqk [2.110402722s] Dec 19 11:08:11.635: INFO: Created: latency-svc-wnpnj Dec 19 11:08:11.652: INFO: Got endpoints: latency-svc-wnpnj [2.182409308s] Dec 19 11:08:11.825: INFO: Created: latency-svc-wtq2q Dec 19 11:08:11.864: INFO: Got endpoints: latency-svc-wtq2q [2.325567222s] Dec 19 11:08:12.202: INFO: Created: latency-svc-lpznr Dec 19 11:08:12.257: INFO: Got endpoints: latency-svc-lpznr [2.589835925s] Dec 19 11:08:12.323: INFO: Created: latency-svc-5lkqw Dec 19 11:08:12.463: INFO: Got endpoints: latency-svc-5lkqw [2.739290816s] Dec 19 11:08:12.486: INFO: Created: latency-svc-5t2hz Dec 19 11:08:12.501: INFO: Got endpoints: latency-svc-5t2hz [2.624092452s] Dec 19 11:08:12.651: INFO: Created: latency-svc-snm2r Dec 19 11:08:12.676: INFO: Got endpoints: latency-svc-snm2r [2.678992213s] Dec 19 11:08:12.745: INFO: Created: latency-svc-f4lmn Dec 19 11:08:12.879: INFO: Got endpoints: latency-svc-f4lmn [2.697887691s] Dec 19 11:08:12.939: INFO: Created: latency-svc-ch94t Dec 19 11:08:12.966: INFO: Got endpoints: latency-svc-ch94t [2.715524882s] Dec 19 11:08:13.104: INFO: Created: latency-svc-5z7jz Dec 19 11:08:13.148: INFO: Created: latency-svc-p4652 Dec 19 11:08:13.151: INFO: Got endpoints: latency-svc-5z7jz [2.52818125s] Dec 19 11:08:13.248: INFO: Got endpoints: latency-svc-p4652 [2.4446576s] Dec 19 11:08:13.294: INFO: Created: latency-svc-fhv8h Dec 19 11:08:13.315: INFO: Got endpoints: latency-svc-fhv8h [2.443721718s] Dec 19 11:08:13.453: INFO: Created: latency-svc-29xvz Dec 19 11:08:13.459: INFO: Got endpoints: latency-svc-29xvz [2.428327163s] Dec 19 11:08:13.459: INFO: Latencies: [220.172654ms 221.086395ms 242.297508ms 495.242329ms 669.508326ms 883.660519ms 1.111896189s 1.186311372s 1.282214745s 1.357157759s 1.526930036s 1.69331354s 1.946674218s 2.017394976s 2.022511545s 2.110402722s 2.117382473s 2.132908709s 
2.132959242s 2.140725244s 2.159774062s 2.174170353s 2.182409308s 2.186536446s 2.190728845s 2.264131867s 2.266481063s 2.273032498s 2.283804492s 2.284739493s 2.28715119s 2.292851475s 2.298130848s 2.325567222s 2.32627947s 2.340772367s 2.353014329s 2.391738701s 2.397833064s 2.398679337s 2.428327163s 2.433835401s 2.43799413s 2.439006728s 2.44340791s 2.443721718s 2.4446576s 2.445319996s 2.453941294s 2.461942966s 2.462337913s 2.500238346s 2.502416611s 2.502469821s 2.503479845s 2.509695499s 2.511926784s 2.516207774s 2.524746705s 2.52818125s 2.530734974s 2.537742337s 2.540044722s 2.545027712s 2.557842943s 2.560032838s 2.560306127s 2.568176207s 2.571829584s 2.581061774s 2.581116293s 2.589835925s 2.591102328s 2.600567093s 2.624092452s 2.625559041s 2.627251452s 2.630092294s 2.635923515s 2.64181306s 2.652112367s 2.654413184s 2.655955902s 2.656633044s 2.668643535s 2.678992213s 2.681893917s 2.682078252s 2.69012362s 2.690137855s 2.692091604s 2.693806152s 2.697887691s 2.698302825s 2.713731521s 2.715246447s 2.715524882s 2.717454885s 2.724611407s 2.739290816s 2.744223209s 2.754866369s 2.759190709s 2.767131847s 2.771457449s 2.794036173s 2.803110105s 2.828817903s 2.829391076s 2.832876494s 2.838050401s 2.852324215s 2.854400678s 2.857321619s 2.872809289s 2.890506771s 2.902376892s 2.905061641s 2.90798734s 2.909769031s 2.929248163s 2.92978073s 2.935700217s 2.94287019s 2.943469338s 2.950473364s 2.953679274s 2.953835578s 2.957607457s 2.963011093s 2.96585403s 2.975753041s 2.986379338s 2.98646719s 2.996788391s 2.99764305s 3.002255295s 3.008860734s 3.02144548s 3.02185129s 3.032278453s 3.033306695s 3.039925341s 3.048540837s 3.055850887s 3.064221486s 3.073222015s 3.074511758s 3.078832183s 3.082326714s 3.090155633s 3.092835369s 3.105148329s 3.110201988s 3.117064354s 3.128532458s 3.142798211s 3.143994538s 3.161821299s 3.19092976s 3.196748628s 3.200290504s 3.21035663s 3.211897649s 3.215263749s 3.216708299s 3.232369733s 3.247988472s 3.254344418s 3.263268827s 3.264176792s 3.265214059s 3.268178071s 
3.283809986s 3.292088419s 3.325302655s 3.358841365s 3.364310261s 3.37007577s 3.391388563s 3.459606926s 3.481835874s 3.488542427s 3.559113046s 3.918305812s 4.403262722s 5.173720557s 5.226167986s 5.272997098s 5.415070494s 5.444409672s 5.73726797s 6.048029113s 6.055121829s 6.086036529s 6.142746383s 6.148729333s 6.313421059s 6.394570636s 6.402211811s] Dec 19 11:08:13.459: INFO: 50 %ile: 2.744223209s Dec 19 11:08:13.459: INFO: 90 %ile: 3.459606926s Dec 19 11:08:13.459: INFO: 99 %ile: 6.394570636s Dec 19 11:08:13.459: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:08:13.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-rs29h" for this suite. Dec 19 11:09:05.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:09:05.643: INFO: namespace: e2e-tests-svc-latency-rs29h, resource: bindings, ignored listing per whitelist Dec 19 11:09:05.699: INFO: namespace e2e-tests-svc-latency-rs29h deletion completed in 52.228943776s • [SLOW TEST:104.318 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:09:05.700: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Dec 19 11:09:16.652: INFO: Successfully updated pod "annotationupdatef8c1f377-224f-11ea-a3c6-0242ac110004" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:09:18.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gvrk4" for this suite. Dec 19 11:09:40.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:09:41.042: INFO: namespace: e2e-tests-downward-api-gvrk4, resource: bindings, ignored listing per whitelist Dec 19 11:09:41.096: INFO: namespace e2e-tests-downward-api-gvrk4 deletion completed in 22.232883355s • [SLOW TEST:35.397 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:09:41.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 19 11:09:41.319: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 19 11:09:41.348: INFO: Waiting for terminating namespaces to be deleted...
Dec 19 11:09:41.355: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 19 11:09:41.371: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 11:09:41.371: INFO: Container coredns ready: true, restart count 0
Dec 19 11:09:41.371: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 19 11:09:41.371: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 19 11:09:41.371: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 19 11:09:41.371: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 11:09:41.371: INFO: Container coredns ready: true, restart count 0
Dec 19 11:09:41.371: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 19 11:09:41.371: INFO: Container kube-proxy ready: true, restart count 0
Dec 19 11:09:41.371: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 19 11:09:41.371: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 19 11:09:41.371: INFO: Container weave ready: true, restart count 0
Dec 19 11:09:41.371: INFO: Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-152058bb-2250-11ea-a3c6-0242ac110004 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-152058bb-2250-11ea-a3c6-0242ac110004 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-152058bb-2250-11ea-a3c6-0242ac110004
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:10:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7hxhk" for this suite.
Dec 19 11:10:22.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:10:22.603: INFO: namespace: e2e-tests-sched-pred-7hxhk, resource: bindings, ignored listing per whitelist
Dec 19 11:10:22.620: INFO: namespace e2e-tests-sched-pred-7hxhk deletion completed in 14.496648483s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:41.524 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:10:22.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 19 11:10:22.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 19 11:10:23.097: INFO: stderr: ""
Dec 19 11:10:23.097: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:10:23.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tdn4k" for this suite.
Dec 19 11:10:29.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:10:29.272: INFO: namespace: e2e-tests-kubectl-tdn4k, resource: bindings, ignored listing per whitelist
Dec 19 11:10:29.418: INFO: namespace e2e-tests-kubectl-tdn4k deletion completed in 6.306132167s
• [SLOW TEST:6.797 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:10:29.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 11:10:29.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-ktqbt" to be "success or failure"
Dec 19 11:10:29.731: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 76.533557ms
Dec 19 11:10:31.744: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090055755s
Dec 19 11:10:33.764: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109438407s
Dec 19 11:10:36.544: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.889572393s
Dec 19 11:10:38.580: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.925355401s
Dec 19 11:10:40.622: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.968029324s
STEP: Saw pod success
Dec 19 11:10:40.622: INFO: Pod "downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:10:40.641: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004 container client-container:
STEP: delete the pod
Dec 19 11:10:41.645: INFO: Waiting for pod downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:10:41.666: INFO: Pod downwardapi-volume-2aa479fc-2250-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:10:41.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ktqbt" for this suite.
Dec 19 11:10:47.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:10:48.093: INFO: namespace: e2e-tests-projected-ktqbt, resource: bindings, ignored listing per whitelist
Dec 19 11:10:48.712: INFO: namespace e2e-tests-projected-ktqbt deletion completed in 7.029066837s
• [SLOW TEST:19.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:10:48.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3645b5dd-2250-11ea-a3c6-0242ac110004
STEP: Creating secret with name s-test-opt-upd-3645b65f-2250-11ea-a3c6-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3645b5dd-2250-11ea-a3c6-0242ac110004
STEP: Updating secret s-test-opt-upd-3645b65f-2250-11ea-a3c6-0242ac110004
STEP: Creating secret with name s-test-opt-create-3645b689-2250-11ea-a3c6-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:12:09.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-77mcd" for this suite.
Dec 19 11:12:34.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:12:34.768: INFO: namespace: e2e-tests-secrets-77mcd, resource: bindings, ignored listing per whitelist
Dec 19 11:12:34.782: INFO: namespace e2e-tests-secrets-77mcd deletion completed in 25.075471857s
• [SLOW TEST:106.070 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:12:34.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 19 11:12:35.023: INFO: Waiting up to 5m0s for pod "pod-755e9474-2250-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-g7xnm" to be "success or failure"
Dec 19 11:12:35.045: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.886453ms
Dec 19 11:12:37.185: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162197354s
Dec 19 11:12:39.223: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199780173s
Dec 19 11:12:41.237: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213910494s
Dec 19 11:12:43.609: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586319064s
Dec 19 11:12:45.644: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.620964968s
Dec 19 11:12:47.675: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 12.651515397s
Dec 19 11:12:50.225: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.20156621s
STEP: Saw pod success
Dec 19 11:12:50.225: INFO: Pod "pod-755e9474-2250-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:12:50.233: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-755e9474-2250-11ea-a3c6-0242ac110004 container test-container:
STEP: delete the pod
Dec 19 11:12:50.499: INFO: Waiting for pod pod-755e9474-2250-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:12:50.517: INFO: Pod pod-755e9474-2250-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:12:50.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g7xnm" for this suite.
Dec 19 11:12:56.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:12:56.730: INFO: namespace: e2e-tests-emptydir-g7xnm, resource: bindings, ignored listing per whitelist
Dec 19 11:12:56.865: INFO: namespace e2e-tests-emptydir-g7xnm deletion completed in 6.324081274s
• [SLOW TEST:22.082 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:12:56.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jlct4/configmap-test-828c061d-2250-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 11:12:57.208: INFO: Waiting up to 5m0s for pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-jlct4" to be "success or failure"
Dec 19 11:12:57.232: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.852715ms
Dec 19 11:12:59.247: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039519824s
Dec 19 11:13:01.261: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053054376s
Dec 19 11:13:03.430: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222204151s
Dec 19 11:13:05.445: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237660987s
Dec 19 11:13:07.472: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.264590595s
STEP: Saw pod success
Dec 19 11:13:07.472: INFO: Pod "pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:13:07.478: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004 container env-test:
STEP: delete the pod
Dec 19 11:13:07.604: INFO: Waiting for pod pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:13:07.648: INFO: Pod pod-configmaps-828f364c-2250-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:13:07.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jlct4" for this suite.
Dec 19 11:13:13.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:13:13.917: INFO: namespace: e2e-tests-configmap-jlct4, resource: bindings, ignored listing per whitelist
Dec 19 11:13:14.052: INFO: namespace e2e-tests-configmap-jlct4 deletion completed in 6.259266433s
• [SLOW TEST:17.187 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:13:14.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 11:13:14.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-7jmdc" to be "success or failure"
Dec 19 11:13:14.475: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 141.284544ms
Dec 19 11:13:16.510: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176391038s
Dec 19 11:13:18.564: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230433457s
Dec 19 11:13:20.891: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55681182s
Dec 19 11:13:22.973: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.639227009s
STEP: Saw pod success
Dec 19 11:13:22.973: INFO: Pod "downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:13:22.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004 container client-container:
STEP: delete the pod
Dec 19 11:13:23.182: INFO: Waiting for pod downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:13:23.228: INFO: Pod downwardapi-volume-8cc762aa-2250-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:13:23.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7jmdc" for this suite.
Dec 19 11:13:29.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:13:29.451: INFO: namespace: e2e-tests-downward-api-7jmdc, resource: bindings, ignored listing per whitelist
Dec 19 11:13:29.512: INFO: namespace e2e-tests-downward-api-7jmdc deletion completed in 6.265277904s
• [SLOW TEST:15.459 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:13:29.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:13:39.835: INFO: Waiting up to 5m0s for pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004" in namespace "e2e-tests-pods-2b8jd" to be "success or failure"
Dec 19 11:13:39.872: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 36.361485ms
Dec 19 11:13:41.897: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06178543s
Dec 19 11:13:43.938: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102457262s
Dec 19 11:13:45.955: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119425825s
Dec 19 11:13:47.972: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136168054s
Dec 19 11:13:49.987: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.151443359s
STEP: Saw pod success
Dec 19 11:13:49.987: INFO: Pod "client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:13:49.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004 container env3cont:
STEP: delete the pod
Dec 19 11:13:50.160: INFO: Waiting for pod client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:13:50.193: INFO: Pod client-envvars-9bfa3685-2250-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:13:50.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2b8jd" for this suite.
Dec 19 11:14:44.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:14:44.495: INFO: namespace: e2e-tests-pods-2b8jd, resource: bindings, ignored listing per whitelist
Dec 19 11:14:44.809: INFO: namespace e2e-tests-pods-2b8jd deletion completed in 54.475968583s
• [SLOW TEST:75.297 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:14:44.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9dqxp
Dec 19 11:14:55.322: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9dqxp
STEP: checking the pod's current state and verifying that restartCount is present
Dec 19 11:14:55.329: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:18:56.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9dqxp" for this suite.
Dec 19 11:19:04.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:19:04.384: INFO: namespace: e2e-tests-container-probe-9dqxp, resource: bindings, ignored listing per whitelist
Dec 19 11:19:04.420: INFO: namespace e2e-tests-container-probe-9dqxp deletion completed in 8.32910877s
• [SLOW TEST:259.610 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:19:04.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-9b49g [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9b49g STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-9b49g Dec 19 11:19:04.824: INFO: Found 0 stateful pods, waiting for 1 Dec 19 11:19:14.861: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Dec 19 11:19:24.837: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 19 11:19:24.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 19 11:19:25.414: INFO: stderr: "" Dec 19 11:19:25.415: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 19 11:19:25.415: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 19 11:19:25.435: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 19 11:19:35.615: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 19 11:19:35.615: INFO: Waiting for statefulset status.replicas updated to 0 Dec 19 11:19:35.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999827s Dec 19 11:19:36.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985588906s Dec 19 11:19:37.709: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 7.969136148s Dec 19 11:19:38.729: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.949059246s Dec 19 11:19:39.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.928859556s Dec 19 11:19:40.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.912865621s Dec 19 11:19:41.781: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.893521605s Dec 19 11:19:43.956: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.876794326s Dec 19 11:19:44.977: INFO: Verifying statefulset ss doesn't scale past 1 for another 701.702067ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-9b49g Dec 19 11:19:45.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 19 11:19:46.882: INFO: stderr: "" Dec 19 11:19:46.882: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 19 11:19:46.882: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 19 11:19:46.970: INFO: Found 2 stateful pods, waiting for 3 Dec 19 11:19:56.988: INFO: Found 2 stateful pods, waiting for 3 Dec 19 11:20:07.002: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 19 11:20:07.002: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 19 11:20:07.002: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 19 11:20:07.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-0 -- 
/bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 19 11:20:07.480: INFO: stderr: "" Dec 19 11:20:07.480: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 19 11:20:07.480: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 19 11:20:07.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 19 11:20:08.149: INFO: stderr: "" Dec 19 11:20:08.149: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 19 11:20:08.149: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 19 11:20:08.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 19 11:20:08.661: INFO: stderr: "" Dec 19 11:20:08.661: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 19 11:20:08.661: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 19 11:20:08.661: INFO: Waiting for statefulset status.replicas updated to 0 Dec 19 11:20:08.672: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 19 11:20:18.718: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 19 11:20:18.718: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 19 11:20:18.718: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 19 11:20:18.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999742s Dec 19 11:20:19.777: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.989833097s Dec 19 11:20:20.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.960720303s Dec 19 11:20:21.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.938827391s Dec 19 11:20:22.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.920947378s Dec 19 11:20:23.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.891736962s Dec 19 11:20:24.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.808056539s Dec 19 11:20:25.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.783270818s Dec 19 11:20:26.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768932453s Dec 19 11:20:28.023: INFO: Verifying statefulset ss doesn't scale past 3 for another 746.132206ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-9b49g Dec 19 11:20:29.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 19 11:20:29.626: INFO: stderr: "" Dec 19 11:20:29.626: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 19 11:20:29.626: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 19 11:20:29.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 19 11:20:30.155: INFO: stderr: "" Dec 19 11:20:30.155: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 19 11:20:30.155: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 19 11:20:30.156: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9b49g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 19 11:20:30.635: INFO: stderr: "" Dec 19 11:20:30.635: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 19 11:20:30.635: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 19 11:20:30.635: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 19 11:20:50.726: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9b49g Dec 19 11:20:50.744: INFO: Scaling statefulset ss to 0 Dec 19 11:20:50.771: INFO: Waiting for statefulset status.replicas updated to 0 Dec 19 11:20:50.776: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:20:50.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-9b49g" for this suite. 
Dec 19 11:20:58.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:20:59.125: INFO: namespace: e2e-tests-statefulset-9b49g, resource: bindings, ignored listing per whitelist Dec 19 11:20:59.244: INFO: namespace e2e-tests-statefulset-9b49g deletion completed in 8.39699108s • [SLOW TEST:114.823 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:20:59.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cfq94 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 19 11:20:59.483: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 19 11:21:39.728: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-cfq94 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 19 11:21:39.728: INFO: >>> kubeConfig: /root/.kube/config Dec 19 11:21:40.279: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:21:40.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-cfq94" for this suite. Dec 19 11:22:04.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:22:04.399: INFO: namespace: e2e-tests-pod-network-test-cfq94, resource: bindings, ignored listing per whitelist Dec 19 11:22:04.603: INFO: namespace e2e-tests-pod-network-test-cfq94 deletion completed in 24.302538701s • [SLOW TEST:65.359 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:22:04.605: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 19 11:22:04.903: INFO: Waiting up to 5m0s for pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-d9f2h" to be "success or failure" Dec 19 11:22:04.954: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 50.610049ms Dec 19 11:22:07.909: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005446902s Dec 19 11:22:09.938: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.034733559s Dec 19 11:22:11.959: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.055996315s Dec 19 11:22:14.005: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.101989297s Dec 19 11:22:16.019: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.116057939s STEP: Saw pod success Dec 19 11:22:16.019: INFO: Pod "pod-c905f5e3-2251-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:22:16.024: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c905f5e3-2251-11ea-a3c6-0242ac110004 container test-container: STEP: delete the pod Dec 19 11:22:16.505: INFO: Waiting for pod pod-c905f5e3-2251-11ea-a3c6-0242ac110004 to disappear Dec 19 11:22:16.600: INFO: Pod pod-c905f5e3-2251-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:22:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d9f2h" for this suite. Dec 19 11:22:22.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:22:22.742: INFO: namespace: e2e-tests-emptydir-d9f2h, resource: bindings, ignored listing per whitelist Dec 19 11:22:22.750: INFO: namespace e2e-tests-emptydir-d9f2h deletion completed in 6.128280681s • [SLOW TEST:18.145 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:22:22.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Dec 19 11:22:22.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:25.092: INFO: stderr: "" Dec 19 11:22:25.092: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 19 11:22:25.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:25.297: INFO: stderr: "" Dec 19 11:22:25.298: INFO: stdout: "update-demo-nautilus-k2m9s update-demo-nautilus-zf9lh " Dec 19 11:22:25.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k2m9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:25.427: INFO: stderr: "" Dec 19 11:22:25.428: INFO: stdout: "" Dec 19 11:22:25.428: INFO: update-demo-nautilus-k2m9s is created but not running Dec 19 11:22:30.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:30.603: INFO: stderr: "" Dec 19 11:22:30.603: INFO: stdout: "update-demo-nautilus-k2m9s update-demo-nautilus-zf9lh " Dec 19 11:22:30.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k2m9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:30.749: INFO: stderr: "" Dec 19 11:22:30.749: INFO: stdout: "" Dec 19 11:22:30.749: INFO: update-demo-nautilus-k2m9s is created but not running Dec 19 11:22:35.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:36.062: INFO: stderr: "" Dec 19 11:22:36.062: INFO: stdout: "update-demo-nautilus-k2m9s update-demo-nautilus-zf9lh " Dec 19 11:22:36.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k2m9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:36.457: INFO: stderr: "" Dec 19 11:22:36.457: INFO: stdout: "" Dec 19 11:22:36.457: INFO: update-demo-nautilus-k2m9s is created but not running Dec 19 11:22:41.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:41.618: INFO: stderr: "" Dec 19 11:22:41.618: INFO: stdout: "update-demo-nautilus-k2m9s update-demo-nautilus-zf9lh " Dec 19 11:22:41.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k2m9s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:41.726: INFO: stderr: "" Dec 19 11:22:41.726: INFO: stdout: "true" Dec 19 11:22:41.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k2m9s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:41.850: INFO: stderr: "" Dec 19 11:22:41.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:22:41.851: INFO: validating pod update-demo-nautilus-k2m9s Dec 19 11:22:41.921: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:22:41.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:22:41.922: INFO: update-demo-nautilus-k2m9s is verified up and running Dec 19 11:22:41.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf9lh -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:42.070: INFO: stderr: "" Dec 19 11:22:42.071: INFO: stdout: "true" Dec 19 11:22:42.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zf9lh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:42.207: INFO: stderr: "" Dec 19 11:22:42.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:22:42.207: INFO: validating pod update-demo-nautilus-zf9lh Dec 19 11:22:42.230: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:22:42.230: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:22:42.231: INFO: update-demo-nautilus-zf9lh is verified up and running STEP: using delete to clean up resources Dec 19 11:22:42.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:42.419: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 19 11:22:42.419: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 19 11:22:42.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5z9lg' Dec 19 11:22:42.615: INFO: stderr: "No resources found.\n" Dec 19 11:22:42.615: INFO: stdout: "" Dec 19 11:22:42.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-5z9lg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 19 11:22:42.760: INFO: stderr: "" Dec 19 11:22:42.760: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:22:42.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5z9lg" for this suite. 
Dec 19 11:23:06.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:23:06.937: INFO: namespace: e2e-tests-kubectl-5z9lg, resource: bindings, ignored listing per whitelist Dec 19 11:23:06.977: INFO: namespace e2e-tests-kubectl-5z9lg deletion completed in 24.206284395s • [SLOW TEST:44.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:23:06.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 19 11:23:07.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-xkhvm" to be "success or failure" Dec 19 11:23:07.270: 
INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 36.925415ms Dec 19 11:23:09.302: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069106181s Dec 19 11:23:11.314: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081396211s Dec 19 11:23:13.912: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.679547823s Dec 19 11:23:15.926: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693305836s Dec 19 11:23:17.942: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.709105184s STEP: Saw pod success Dec 19 11:23:17.942: INFO: Pod "downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:23:17.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004 container client-container: STEP: delete the pod Dec 19 11:23:18.756: INFO: Waiting for pod downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004 to disappear Dec 19 11:23:18.768: INFO: Pod downwardapi-volume-ee2c92bd-2251-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:23:18.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xkhvm" for this suite. 
Dec 19 11:23:24.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:23:24.921: INFO: namespace: e2e-tests-downward-api-xkhvm, resource: bindings, ignored listing per whitelist Dec 19 11:23:24.974: INFO: namespace e2e-tests-downward-api-xkhvm deletion completed in 6.197845978s • [SLOW TEST:17.997 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:23:24.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f8e9f6d4-2251-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 19 11:23:25.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-q95df" to be "success or failure" Dec 19 11:23:25.224: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.416312ms Dec 19 11:23:27.541: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325808217s Dec 19 11:23:29.559: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343213766s Dec 19 11:23:31.575: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359712912s Dec 19 11:23:33.594: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378420054s Dec 19 11:23:35.626: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.410535971s Dec 19 11:23:40.047: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.831988686s STEP: Saw pod success Dec 19 11:23:40.048: INFO: Pod "pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:23:40.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004 container configmap-volume-test: STEP: delete the pod Dec 19 11:23:42.021: INFO: Waiting for pod pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004 to disappear Dec 19 11:23:42.261: INFO: Pod pod-configmaps-f8ead58f-2251-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:23:42.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-q95df" for this suite. 
Dec 19 11:23:48.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:23:48.662: INFO: namespace: e2e-tests-configmap-q95df, resource: bindings, ignored listing per whitelist Dec 19 11:23:48.699: INFO: namespace e2e-tests-configmap-q95df deletion completed in 6.407464915s • [SLOW TEST:23.724 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:23:48.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 19 11:24:09.212: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 19 11:24:09.241: INFO: Pod pod-with-poststart-http-hook still exists Dec 19 11:24:11.242: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 19 11:24:11.935: INFO: Pod pod-with-poststart-http-hook still exists Dec 19 11:24:13.241: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 19 11:24:13.939: INFO: Pod pod-with-poststart-http-hook still exists Dec 19 11:24:15.242: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 19 11:24:15.251: INFO: Pod pod-with-poststart-http-hook still exists Dec 19 11:24:17.242: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 19 11:24:17.262: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:24:17.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nfqzm" for this suite. 
Dec 19 11:24:41.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:24:41.520: INFO: namespace: e2e-tests-container-lifecycle-hook-nfqzm, resource: bindings, ignored listing per whitelist Dec 19 11:24:41.678: INFO: namespace e2e-tests-container-lifecycle-hook-nfqzm deletion completed in 24.326223264s • [SLOW TEST:52.978 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:24:41.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Dec 19 11:24:41.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 19 11:24:41.968: INFO: 
stderr: "" Dec 19 11:24:41.968: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:24:41.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nvxkh" for this suite. Dec 19 11:24:48.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:24:48.093: INFO: namespace: e2e-tests-kubectl-nvxkh, resource: bindings, ignored listing per whitelist Dec 19 11:24:48.163: INFO: namespace e2e-tests-kubectl-nvxkh deletion completed in 6.186223867s • [SLOW TEST:6.485 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:24:48.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004 Dec 19 11:24:48.591: INFO: Pod name my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004: Found 0 pods out of 1 Dec 19 11:24:53.957: INFO: Pod name my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004: Found 1 pods out of 1 Dec 19 11:24:53.957: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004" are running Dec 19 11:24:58.109: INFO: Pod "my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004-689cx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 11:24:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 11:24:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 11:24:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 11:24:48 +0000 UTC Reason: Message:}]) Dec 19 11:24:58.109: INFO: Trying to dial the pod Dec 19 11:25:03.149: INFO: Controller my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004: Got expected result from replica 1 [my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004-689cx]: "my-hostname-basic-2a8826ba-2252-11ea-a3c6-0242ac110004-689cx", 1 of 1 required successes so far [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:25:03.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-zkg7g" for this suite. Dec 19 11:25:11.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:25:11.459: INFO: namespace: e2e-tests-replication-controller-zkg7g, resource: bindings, ignored listing per whitelist Dec 19 11:25:11.523: INFO: namespace e2e-tests-replication-controller-zkg7g deletion completed in 8.360481692s • [SLOW TEST:23.360 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:25:11.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-38792e54-2252-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume 
configMaps Dec 19 11:25:12.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-wfmwd" to be "success or failure" Dec 19 11:25:12.200: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.700048ms Dec 19 11:25:14.667: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476326536s Dec 19 11:25:16.682: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491012389s Dec 19 11:25:19.149: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958776512s Dec 19 11:25:21.339: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.148444877s Dec 19 11:25:23.359: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.167937507s Dec 19 11:25:25.378: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.187220468s STEP: Saw pod success Dec 19 11:25:25.378: INFO: Pod "pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:25:25.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 19 11:25:25.542: INFO: Waiting for pod pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004 to disappear Dec 19 11:25:25.562: INFO: Pod pod-projected-configmaps-387a4ee7-2252-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:25:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wfmwd" for this suite. Dec 19 11:25:31.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:25:31.737: INFO: namespace: e2e-tests-projected-wfmwd, resource: bindings, ignored listing per whitelist Dec 19 11:25:31.747: INFO: namespace e2e-tests-projected-wfmwd deletion completed in 6.170905327s • [SLOW TEST:20.222 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:25:31.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-447f0a4d-2252-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 19 11:25:32.038: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-jrzdf" to be "success or failure" Dec 19 11:25:32.159: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 121.05516ms Dec 19 11:25:34.193: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155618937s Dec 19 11:25:36.232: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194740449s Dec 19 11:25:38.796: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758615163s Dec 19 11:25:40.811: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.773194473s Dec 19 11:25:42.828: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.790470122s STEP: Saw pod success Dec 19 11:25:42.828: INFO: Pod "pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:25:42.844: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 19 11:25:43.054: INFO: Waiting for pod pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004 to disappear Dec 19 11:25:43.064: INFO: Pod pod-projected-configmaps-44807814-2252-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:25:43.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jrzdf" for this suite. Dec 19 11:25:49.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:25:49.287: INFO: namespace: e2e-tests-projected-jrzdf, resource: bindings, ignored listing per whitelist Dec 19 11:25:49.321: INFO: namespace e2e-tests-projected-jrzdf deletion completed in 6.251576895s • [SLOW TEST:17.574 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Dec 19 11:25:49.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 19 11:25:49.585: INFO: Waiting up to 5m0s for pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-bc6l6" to be "success or failure" Dec 19 11:25:49.618: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.705853ms Dec 19 11:25:51.918: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332668423s Dec 19 11:25:53.967: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382278886s Dec 19 11:25:55.987: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40219388s Dec 19 11:25:58.010: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424664279s Dec 19 11:26:00.023: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.438211188s STEP: Saw pod success Dec 19 11:26:00.023: INFO: Pod "pod-4ef71e29-2252-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:26:00.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ef71e29-2252-11ea-a3c6-0242ac110004 container test-container: STEP: delete the pod Dec 19 11:26:00.595: INFO: Waiting for pod pod-4ef71e29-2252-11ea-a3c6-0242ac110004 to disappear Dec 19 11:26:00.649: INFO: Pod pod-4ef71e29-2252-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:26:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bc6l6" for this suite. Dec 19 11:26:06.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:26:06.989: INFO: namespace: e2e-tests-emptydir-bc6l6, resource: bindings, ignored listing per whitelist Dec 19 11:26:06.993: INFO: namespace e2e-tests-emptydir-bc6l6 deletion completed in 6.293583997s • [SLOW TEST:17.672 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:26:06.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Dec 19 11:26:07.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wph9v' Dec 19 11:26:07.612: INFO: stderr: "" Dec 19 11:26:07.612: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Dec 19 11:26:09.062: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:09.062: INFO: Found 0 / 1 Dec 19 11:26:09.633: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:09.633: INFO: Found 0 / 1 Dec 19 11:26:10.648: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:10.648: INFO: Found 0 / 1 Dec 19 11:26:11.638: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:11.639: INFO: Found 0 / 1 Dec 19 11:26:12.636: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:12.636: INFO: Found 0 / 1 Dec 19 11:26:14.555: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:14.555: INFO: Found 0 / 1 Dec 19 11:26:15.285: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:15.286: INFO: Found 0 / 1 Dec 19 11:26:15.646: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:15.646: INFO: Found 0 / 1 Dec 19 11:26:16.646: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:16.646: INFO: Found 0 / 1 Dec 19 11:26:17.634: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:17.634: INFO: Found 0 / 1 Dec 19 11:26:18.649: INFO: Selector matched 1 pods for 
map[app:redis] Dec 19 11:26:18.649: INFO: Found 1 / 1 Dec 19 11:26:18.649: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 19 11:26:18.676: INFO: Selector matched 1 pods for map[app:redis] Dec 19 11:26:18.676: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Dec 19 11:26:18.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v' Dec 19 11:26:18.847: INFO: stderr: "" Dec 19 11:26:18.847: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Dec 11:26:17.075 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Dec 11:26:17.075 # Server started, Redis version 3.2.12\n1:M 19 Dec 11:26:17.076 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 19 Dec 11:26:17.076 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 19 11:26:18.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v --tail=1' Dec 19 11:26:18.993: INFO: stderr: "" Dec 19 11:26:18.993: INFO: stdout: "1:M 19 Dec 11:26:17.076 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 19 11:26:18.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v --limit-bytes=1' Dec 19 11:26:19.107: INFO: stderr: "" Dec 19 11:26:19.107: INFO: stdout: " " STEP: exposing timestamps Dec 19 11:26:19.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v --tail=1 --timestamps' Dec 19 11:26:19.224: INFO: stderr: "" Dec 19 11:26:19.224: INFO: stdout: "2019-12-19T11:26:17.076893019Z 1:M 19 Dec 11:26:17.076 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 19 11:26:21.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v --since=1s' Dec 19 11:26:21.951: INFO: stderr: "" Dec 19 11:26:21.951: INFO: stdout: "" Dec 19 11:26:21.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4zjw5 redis-master --namespace=e2e-tests-kubectl-wph9v --since=24h' Dec 19 11:26:22.110: INFO: stderr: "" Dec 19 11:26:22.110: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Dec 11:26:17.075 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Dec 11:26:17.075 # Server started, Redis version 3.2.12\n1:M 19 Dec 11:26:17.076 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Dec 11:26:17.076 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Dec 19 11:26:22.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wph9v' Dec 19 11:26:22.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 19 11:26:22.341: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 19 11:26:22.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-wph9v' Dec 19 11:26:22.632: INFO: stderr: "No resources found.\n" Dec 19 11:26:22.632: INFO: stdout: "" Dec 19 11:26:22.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-wph9v -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 19 11:26:22.796: INFO: stderr: "" Dec 19 11:26:22.796: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:26:22.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wph9v" for this suite. 
Dec 19 11:26:46.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:26:47.065: INFO: namespace: e2e-tests-kubectl-wph9v, resource: bindings, ignored listing per whitelist Dec 19 11:26:47.157: INFO: namespace e2e-tests-kubectl-wph9v deletion completed in 24.332244214s • [SLOW TEST:40.163 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:26:47.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 19 11:26:57.640: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-715dafd9-2252-11ea-a3c6-0242ac110004,GenerateName:,Namespace:e2e-tests-events-jtp28,SelfLink:/api/v1/namespaces/e2e-tests-events-jtp28/pods/send-events-715dafd9-2252-11ea-a3c6-0242ac110004,UID:7166c6c4-2252-11ea-a994-fa163e34d433,ResourceVersion:15338366,Generation:0,CreationTimestamp:2019-12-19 11:26:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 284948484,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xg4rs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xg4rs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-xg4rs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002019510} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002019530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:26:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:26:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:26:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:26:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-19 11:26:47 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-19 11:26:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9eefcfebae21fd6783d4e8d6dbfc0f8733e25f24ba721309b09c90990eb482b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Dec 19 11:26:59.662: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 19 11:27:01.691: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:27:01.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-jtp28" for this suite. 
Dec 19 11:27:43.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:27:44.345: INFO: namespace: e2e-tests-events-jtp28, resource: bindings, ignored listing per whitelist Dec 19 11:27:44.353: INFO: namespace e2e-tests-events-jtp28 deletion completed in 42.596717659s • [SLOW TEST:57.196 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:27:44.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 19 11:27:44.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-slscf" to be "success or failure" Dec 19 11:27:44.613: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Pending", 
Reason="", readiness=false. Elapsed: 21.713577ms Dec 19 11:27:46.647: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055613321s Dec 19 11:27:48.664: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072643871s Dec 19 11:27:50.677: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085938424s Dec 19 11:27:53.575: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.983367259s Dec 19 11:27:55.597: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.005375572s STEP: Saw pod success Dec 19 11:27:55.597: INFO: Pod "downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:27:55.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004 container client-container: STEP: delete the pod Dec 19 11:27:56.507: INFO: Waiting for pod downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004 to disappear Dec 19 11:27:56.527: INFO: Pod downwardapi-volume-9383b22e-2252-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:27:56.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-slscf" for this suite. 
Dec 19 11:28:04.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:28:05.033: INFO: namespace: e2e-tests-projected-slscf, resource: bindings, ignored listing per whitelist
Dec 19 11:28:05.035: INFO: namespace e2e-tests-projected-slscf deletion completed in 8.401455087s
• [SLOW TEST:20.682 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:28:05.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:28:05.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:28:15.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qzld9" for this suite.
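The websocket test above drives the API server's pod `exec` endpoint over a websocket connection. Under the `channel.k8s.io` subprotocol, the first byte of every frame identifies the stream it belongs to (0 = stdin, 1 = stdout, 2 = stderr, 3 = error). A minimal Python sketch of demultiplexing such frames; the helper name `demux_exec_frames` and the sample frames are illustrative, not taken from the test itself.

```python
# Channel ids used by the Kubernetes "channel.k8s.io" exec subprotocol:
# byte 0 of each websocket frame names the stream the payload belongs to.
STDIN, STDOUT, STDERR, ERROR = 0, 1, 2, 3

def demux_exec_frames(frames):
    """Split raw exec websocket frames into per-stream byte buffers.

    `frames` is an iterable of bytes objects as read off the websocket;
    returns a dict {channel_id: concatenated payload}.
    """
    streams = {}
    for frame in frames:
        if not frame:
            continue  # empty frame carries no channel byte; nothing to route
        channel, payload = frame[0], frame[1:]
        streams[channel] = streams.get(channel, b"") + payload
    return streams

# Example: two stdout frames and one empty stderr frame, as a server might
# send for a hypothetical `echo hello` executed in the pod.
frames = [bytes([STDOUT]) + b"hel", bytes([STDOUT]) + b"lo\n", bytes([STDERR])]
print(demux_exec_frames(frames)[STDOUT])  # b'hello\n'
```

Reassembling per-channel buffers like this is why a conformance test can assert on exact stdout while ignoring interleaved stderr traffic.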
Dec 19 11:29:05.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:29:05.884: INFO: namespace: e2e-tests-pods-qzld9, resource: bindings, ignored listing per whitelist
Dec 19 11:29:06.021: INFO: namespace e2e-tests-pods-qzld9 deletion completed in 50.22830985s
• [SLOW TEST:60.986 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:29:06.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:29:06.264: INFO: Creating deployment "nginx-deployment"
Dec 19 11:29:06.277: INFO: Waiting for observed generation 1
Dec 19 11:29:10.473: INFO: Waiting for all required pods to come up
Dec 19 11:29:11.374: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 19 11:29:49.419: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 19 11:29:49.433: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 19 11:29:49.464: INFO: Updating deployment nginx-deployment
Dec 19 11:29:49.464: INFO: Waiting for observed generation 2
Dec 19 11:29:52.107: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 19 11:29:52.125: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 19 11:29:52.145: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 19 11:29:54.490: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 19 11:29:54.490: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 19 11:29:54.511: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 19 11:29:54.530: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 19 11:29:54.530: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 19 11:29:54.717: INFO: Updating deployment nginx-deployment
Dec 19 11:29:54.717: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 19 11:29:56.165: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 19 11:30:00.248: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 19 11:30:00.622: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-49rm5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49rm5/deployments/nginx-deployment,UID:c43475e3-2252-11ea-a994-fa163e34d433,ResourceVersion:15338809,Generation:3,CreationTimestamp:2019-12-19 11:29:06 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-19 11:29:50 +0000 UTC 2019-12-19 11:29:06 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-19 11:29:58 +0000 UTC 2019-12-19 11:29:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 19 11:30:00.698: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-49rm5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49rm5/replicasets/nginx-deployment-5c98f8fb5,UID:ddf4f8cd-2252-11ea-a994-fa163e34d433,ResourceVersion:15338803,Generation:3,CreationTimestamp:2019-12-19 11:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c43475e3-2252-11ea-a994-fa163e34d433 0xc0020cdc27 0xc0020cdc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 19 11:30:00.698: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 19 11:30:00.699: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-49rm5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-49rm5/replicasets/nginx-deployment-85ddf47c5d,UID:c43a1cf1-2252-11ea-a994-fa163e34d433,ResourceVersion:15338800,Generation:3,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c43475e3-2252-11ea-a994-fa163e34d433 0xc0020cdda7 0xc0020cdda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 19 11:30:01.385: INFO: Pod "nginx-deployment-5c98f8fb5-48jl2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-48jl2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-48jl2,UID:de73f185-2252-11ea-a994-fa163e34d433,ResourceVersion:15338808,Generation:0,CreationTimestamp:2019-12-19 11:29:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135c897 0xc00135c898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135c900} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00135c920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 11:29:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.386: INFO: Pod "nginx-deployment-5c98f8fb5-6pwnx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6pwnx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-6pwnx,UID:de0ce787-2252-11ea-a994-fa163e34d433,ResourceVersion:15338795,Generation:0,CreationTimestamp:2019-12-19 11:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135c9f7 0xc00135c9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135ca60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135ca80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 11:29:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.386: INFO: Pod "nginx-deployment-5c98f8fb5-cwzx8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cwzx8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-cwzx8,UID:e462345a-2252-11ea-a994-fa163e34d433,ResourceVersion:15338825,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135cb47 0xc00135cb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135cbb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135cbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:30:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.386: INFO: Pod "nginx-deployment-5c98f8fb5-frnhs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-frnhs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-frnhs,UID:de077bbc-2252-11ea-a994-fa163e34d433,ResourceVersion:15338788,Generation:0,CreationTimestamp:2019-12-19 11:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135cc47 0xc00135cc48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135ccb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135ccd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 11:29:49 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.387: INFO: Pod "nginx-deployment-5c98f8fb5-hkp2h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hkp2h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-hkp2h,UID:de0c8843-2252-11ea-a994-fa163e34d433,ResourceVersion:15338790,Generation:0,CreationTimestamp:2019-12-19 11:29:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135cd97 0xc00135cd98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135ce00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135ce20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 11:29:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.387: INFO: Pod "nginx-deployment-5c98f8fb5-vgl7j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vgl7j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-vgl7j,UID:de6ce0b3-2252-11ea-a994-fa163e34d433,ResourceVersion:15338793,Generation:0,CreationTimestamp:2019-12-19 11:29:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135cee7 0xc00135cee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135cf60} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00135cf80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 11:29:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.388: INFO: Pod "nginx-deployment-5c98f8fb5-vvmt7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vvmt7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-vvmt7,UID:e49f7e17-2252-11ea-a994-fa163e34d433,ResourceVersion:15338823,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135d077 0xc00135d078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135d0f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.388: INFO: Pod "nginx-deployment-5c98f8fb5-zxjcf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zxjcf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-5c98f8fb5-zxjcf,UID:e49f4196-2252-11ea-a994-fa163e34d433,ResourceVersion:15338818,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 ddf4f8cd-2252-11ea-a994-fa163e34d433 0xc00135d180 0xc00135d181}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135d1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.388: INFO: Pod "nginx-deployment-85ddf47c5d-24jkq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-24jkq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-24jkq,UID:c45f4a26-2252-11ea-a994-fa163e34d433,ResourceVersion:15338704,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc00135d2b0 0xc00135d2b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00135d310} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-19 11:29:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de86fa170e960fb012f218056c0872a002b1fb8ef4e363d85afff53075ace0a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.389: INFO: Pod "nginx-deployment-85ddf47c5d-27b9t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-27b9t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-27b9t,UID:e49f48c7-2252-11ea-a994-fa163e34d433,ResourceVersion:15338820,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc00135d3f7 0xc00135d3f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135d470} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.389: INFO: Pod "nginx-deployment-85ddf47c5d-8zfg4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8zfg4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-8zfg4,UID:e46fad08-2252-11ea-a994-fa163e34d433,ResourceVersion:15338824,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc00135d4f0 0xc00135d4f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00135d550} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:30:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.389: INFO: Pod "nginx-deployment-85ddf47c5d-hgr45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hgr45,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-hgr45,UID:e49e8a4b-2252-11ea-a994-fa163e34d433,ResourceVersion:15338826,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc00135d7f7 0xc00135d7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00135d860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00135d880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.390: INFO: Pod "nginx-deployment-85ddf47c5d-klmzp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-klmzp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-klmzp,UID:c4577791-2252-11ea-a994-fa163e34d433,ResourceVersion:15338720,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc00135d8e0 0xc00135d8e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001792540} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-19 11:29:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://06a250f8c175857a5bc1e9fe51226b402736c3e127f9c53b4b9578c8e92372d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.390: INFO: Pod "nginx-deployment-85ddf47c5d-kp7w9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kp7w9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-kp7w9,UID:e49edb75-2252-11ea-a994-fa163e34d433,ResourceVersion:15338827,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc0017926f7 0xc0017926f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001792760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.391: INFO: Pod "nginx-deployment-85ddf47c5d-qrprw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qrprw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-qrprw,UID:c45ede9f-2252-11ea-a994-fa163e34d433,ResourceVersion:15338737,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001792890 0xc001792891}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017928f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-19 11:29:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a0bfafc13c6c17c7cd0f5a46008345f8ec1f5ed5d54aad4512d3967259b816f3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.391: INFO: Pod "nginx-deployment-85ddf47c5d-r6lnq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r6lnq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-r6lnq,UID:c4773158-2252-11ea-a994-fa163e34d433,ResourceVersion:15338711,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001792a47 0xc001792a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001792ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-19 11:29:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://66ea20ce720e76329ac91e304a328000c18210e98f252088867a422708fc02c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.392: INFO: Pod "nginx-deployment-85ddf47c5d-spln7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-spln7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-spln7,UID:e49f0857-2252-11ea-a994-fa163e34d433,ResourceVersion:15338819,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001792c87 0xc001792c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001792cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.392: INFO: Pod "nginx-deployment-85ddf47c5d-t64dm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t64dm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-t64dm,UID:e46f414b-2252-11ea-a994-fa163e34d433,ResourceVersion:15338822,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001792de0 0xc001792de1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001792e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:30:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.393: INFO: Pod "nginx-deployment-85ddf47c5d-txxlv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-txxlv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-txxlv,UID:c476090d-2252-11ea-a994-fa163e34d433,ResourceVersion:15338717,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001792ee7 0xc001792ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001792f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001792f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-19 11:29:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:43 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://021e9a245700f1ad1a0e114ddc91312ed26d456f32e1a5645e517593afeb8655}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.393: INFO: Pod "nginx-deployment-85ddf47c5d-wls9n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wls9n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-wls9n,UID:c4769493-2252-11ea-a994-fa163e34d433,ResourceVersion:15338705,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc0017930b7 0xc0017930b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001793120} {node.kubernetes.io/unreachable Exists NoExecute 0xc001793140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-19 11:29:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5103b177370c1abd89a11ad06fff26cf8d726ea56212b17612e7321a265e2127}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.394: INFO: Pod "nginx-deployment-85ddf47c5d-wmrc9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wmrc9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-wmrc9,UID:c4515e19-2252-11ea-a994-fa163e34d433,ResourceVersion:15338714,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001793217 0xc001793218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001793320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001793340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-19 11:29:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b5d301ed5b1afd64863371ee66f0c96ba84cb1e08e3a61f0a2265605dd85c349}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.394: INFO: Pod "nginx-deployment-85ddf47c5d-xfchz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfchz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-xfchz,UID:e4627ae9-2252-11ea-a994-fa163e34d433,ResourceVersion:15338814,Generation:0,CreationTimestamp:2019-12-19 11:30:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001793627 0xc001793628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001793690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017936b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:30:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 19 11:30:01.395: INFO: Pod "nginx-deployment-85ddf47c5d-xw84t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xw84t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-49rm5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-49rm5/pods/nginx-deployment-85ddf47c5d-xw84t,UID:c45f037b-2252-11ea-a994-fa163e34d433,ResourceVersion:15338733,Generation:0,CreationTimestamp:2019-12-19 11:29:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c43a1cf1-2252-11ea-a994-fa163e34d433 0xc001793727 0xc001793728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-864wc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-864wc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-864wc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001793790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017937b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 11:29:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-19 11:29:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 11:29:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fdaa3e96a833aeeb15e51dadc1c3777d0389f7511c8a2de86a3d05646d6fcc43}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:30:01.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-49rm5" for this suite. 
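The deployment spec that just passed exercises proportional scaling: when the Deployment is resized mid-rollout, new replicas are split across the old and new ReplicaSets in proportion to their current sizes. A simplified Python sketch of that idea (for illustration only; this is not the actual deployment controller code, which also honors maxSurge/maxUnavailable and breaks rounding ties deterministically):

```python
def proportionally_scale(rs_replicas, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion
    to their current sizes (simplified model of Deployment
    proportional scaling, not the real controller logic)."""
    old_total = sum(rs_replicas.values())
    if old_total == 0:
        # Nothing to be proportional to: give everything to the first RS.
        names = list(rs_replicas)
        return {n: (new_total if i == 0 else 0) for i, n in enumerate(names)}
    # First pass: floor of each ReplicaSet's proportional share.
    scaled = {name: count * new_total // old_total
              for name, count in rs_replicas.items()}
    # Hand out the rounding leftover, largest ReplicaSets first.
    leftover = new_total - sum(scaled.values())
    for name in sorted(rs_replicas, key=rs_replicas.get, reverse=True):
        if leftover == 0:
            break
        scaled[name] += 1
        leftover -= 1
    return scaled

# Example shaped like the run above: an old RS at 10 and a new RS at 5,
# with the Deployment scaled to 30 total.
print(proportionally_scale({"85ddf47c5d": 10, "new-rs": 5}, 30))
# → {'85ddf47c5d': 20, 'new-rs': 10}
```

The ReplicaSet hash and sizes above are taken from this run purely as sample inputs; on another cluster they will differ.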
Dec 19 11:31:01.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:31:03.349: INFO: namespace: e2e-tests-deployment-49rm5, resource: bindings, ignored listing per whitelist
Dec 19 11:31:03.397: INFO: namespace e2e-tests-deployment-49rm5 deletion completed in 1m1.436505273s
• [SLOW TEST:117.375 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:31:03.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0abe1dbe-2253-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 11:31:05.044: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-rgb2z" to be "success or failure"
Dec 19 11:31:05.111: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending",
Reason="", readiness=false. Elapsed: 67.023221ms
Dec 19 11:31:07.312: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268213714s
Dec 19 11:31:09.899: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.85526336s
Dec 19 11:31:11.917: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.873100347s
Dec 19 11:31:14.158: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.114416587s
Dec 19 11:31:16.306: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.262585534s
Dec 19 11:31:18.340: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.296418381s
Dec 19 11:31:20.376: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.332052267s
Dec 19 11:31:22.690: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.646604306s
Dec 19 11:31:24.702: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.658195263s
Dec 19 11:31:26.720: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 21.676545039s
STEP: Saw pod success
Dec 19 11:31:26.720: INFO: Pod "pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:31:26.729: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 19 11:31:27.151: INFO: Waiting for pod pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:31:27.162: INFO: Pod pod-projected-secrets-0abfb859-2253-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:31:27.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rgb2z" for this suite.
Dec 19 11:31:33.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:31:33.465: INFO: namespace: e2e-tests-projected-rgb2z, resource: bindings, ignored listing per whitelist
Dec 19 11:31:33.551: INFO: namespace e2e-tests-projected-rgb2z deletion completed in 6.336018376s
• [SLOW TEST:30.152 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
STEP:
Creating a kubernetes client Dec 19 11:31:33.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Dec 19 11:31:33.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:34.130: INFO: stderr: "" Dec 19 11:31:34.130: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 19 11:31:34.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:34.305: INFO: stderr: "" Dec 19 11:31:34.305: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-mpk27 " Dec 19 11:31:34.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:34.409: INFO: stderr: "" Dec 19 11:31:34.409: INFO: stdout: "" Dec 19 11:31:34.409: INFO: update-demo-nautilus-2np86 is created but not running Dec 19 11:31:39.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:39.567: INFO: stderr: "" Dec 19 11:31:39.567: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-mpk27 " Dec 19 11:31:39.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:39.698: INFO: stderr: "" Dec 19 11:31:39.699: INFO: stdout: "" Dec 19 11:31:39.699: INFO: update-demo-nautilus-2np86 is created but not running Dec 19 11:31:44.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:45.371: INFO: stderr: "" Dec 19 11:31:45.372: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-mpk27 " Dec 19 11:31:45.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:45.525: INFO: stderr: "" Dec 19 11:31:45.525: INFO: stdout: "" Dec 19 11:31:45.525: INFO: update-demo-nautilus-2np86 is created but not running Dec 19 11:31:50.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:50.691: INFO: stderr: "" Dec 19 11:31:50.691: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-mpk27 " Dec 19 11:31:50.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:50.826: INFO: stderr: "" Dec 19 11:31:50.826: INFO: stdout: "true" Dec 19 11:31:50.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:50.921: INFO: stderr: "" Dec 19 11:31:50.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:31:50.921: INFO: validating pod update-demo-nautilus-2np86 Dec 19 11:31:50.932: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:31:50.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:31:50.933: INFO: update-demo-nautilus-2np86 is verified up and running Dec 19 11:31:50.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpk27 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:51.018: INFO: stderr: "" Dec 19 11:31:51.018: INFO: stdout: "true" Dec 19 11:31:51.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpk27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:51.108: INFO: stderr: "" Dec 19 11:31:51.108: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:31:51.108: INFO: validating pod update-demo-nautilus-mpk27 Dec 19 11:31:51.123: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:31:51.123: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:31:51.123: INFO: update-demo-nautilus-mpk27 is verified up and running STEP: scaling down the replication controller Dec 19 11:31:51.126: INFO: scanned /root for discovery docs: Dec 19 11:31:51.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:52.356: INFO: stderr: "" Dec 19 11:31:52.356: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
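The check the suite polls with above is a single dense go-template (`{{if (exists . "status" "containerStatuses")}}…`). The same logic is easier to follow written out as plain code; a hypothetical Python equivalent, operating on a pod object as returned by `kubectl get pods -o json` (the helper name is my own):

```python
def container_running(pod, container_name):
    """Mirror of the go-template check the e2e suite loops on:
    walk status.containerStatuses (if present) and report whether the
    named container has a populated 'running' state."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(
        s.get("name") == container_name and "running" in s.get("state", {})
        for s in statuses
    )
```

The suite retries this every five seconds, printing "created but not running" until the template emits "true", exactly as in the log entries that follow.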
Dec 19 11:31:52.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:52.578: INFO: stderr: "" Dec 19 11:31:52.578: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-mpk27 " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 19 11:31:57.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:57.742: INFO: stderr: "" Dec 19 11:31:57.742: INFO: stdout: "update-demo-nautilus-2np86 " Dec 19 11:31:57.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:57.875: INFO: stderr: "" Dec 19 11:31:57.875: INFO: stdout: "true" Dec 19 11:31:57.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:31:57.984: INFO: stderr: "" Dec 19 11:31:57.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:31:57.984: INFO: validating pod update-demo-nautilus-2np86 Dec 19 11:31:57.991: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:31:57.991: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 19 11:31:57.991: INFO: update-demo-nautilus-2np86 is verified up and running STEP: scaling up the replication controller Dec 19 11:31:57.993: INFO: scanned /root for discovery docs: Dec 19 11:31:57.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:00.415: INFO: stderr: "" Dec 19 11:32:00.415: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 19 11:32:00.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:00.766: INFO: stderr: "" Dec 19 11:32:00.766: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-k4tz6 " Dec 19 11:32:00.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:00.941: INFO: stderr: "" Dec 19 11:32:00.941: INFO: stdout: "true" Dec 19 11:32:00.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:01.512: INFO: stderr: "" Dec 19 11:32:01.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:32:01.512: INFO: validating pod update-demo-nautilus-2np86 Dec 19 11:32:01.533: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:32:01.533: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:32:01.533: INFO: update-demo-nautilus-2np86 is verified up and running Dec 19 11:32:01.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4tz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:01.676: INFO: stderr: "" Dec 19 11:32:01.676: INFO: stdout: "" Dec 19 11:32:01.676: INFO: update-demo-nautilus-k4tz6 is created but not running Dec 19 11:32:06.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:06.826: INFO: stderr: "" Dec 19 11:32:06.826: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-k4tz6 " Dec 19 11:32:06.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:06.950: INFO: stderr: "" Dec 19 11:32:06.950: INFO: stdout: "true" Dec 19 11:32:06.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:07.087: INFO: stderr: "" Dec 19 11:32:07.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:32:07.087: INFO: validating pod update-demo-nautilus-2np86 Dec 19 11:32:07.098: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:32:07.098: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:32:07.098: INFO: update-demo-nautilus-2np86 is verified up and running Dec 19 11:32:07.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4tz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:07.228: INFO: stderr: "" Dec 19 11:32:07.228: INFO: stdout: "" Dec 19 11:32:07.228: INFO: update-demo-nautilus-k4tz6 is created but not running Dec 19 11:32:12.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:12.370: INFO: stderr: "" Dec 19 11:32:12.370: INFO: stdout: "update-demo-nautilus-2np86 update-demo-nautilus-k4tz6 " Dec 19 11:32:12.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:12.477: INFO: stderr: "" Dec 19 11:32:12.477: INFO: stdout: "true" Dec 19 11:32:12.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2np86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:12.611: INFO: stderr: "" Dec 19 11:32:12.611: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:32:12.611: INFO: validating pod update-demo-nautilus-2np86 Dec 19 11:32:12.638: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:32:12.638: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:32:12.638: INFO: update-demo-nautilus-2np86 is verified up and running Dec 19 11:32:12.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4tz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:12.728: INFO: stderr: "" Dec 19 11:32:12.728: INFO: stdout: "true" Dec 19 11:32:12.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4tz6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:12.819: INFO: stderr: "" Dec 19 11:32:12.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 19 11:32:12.819: INFO: validating pod update-demo-nautilus-k4tz6 Dec 19 11:32:12.835: INFO: got data: { "image": "nautilus.jpg" } Dec 19 11:32:12.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 19 11:32:12.835: INFO: update-demo-nautilus-k4tz6 is verified up and running STEP: using delete to clean up resources Dec 19 11:32:12.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:13.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 19 11:32:13.151: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 19 11:32:13.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-s9chd' Dec 19 11:32:13.330: INFO: stderr: "No resources found.\n" Dec 19 11:32:13.330: INFO: stdout: "" Dec 19 11:32:13.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-s9chd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 19 11:32:13.455: INFO: stderr: "" Dec 19 11:32:13.455: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:32:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s9chd" for this 
suite. Dec 19 11:32:37.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:32:37.818: INFO: namespace: e2e-tests-kubectl-s9chd, resource: bindings, ignored listing per whitelist Dec 19 11:32:37.840: INFO: namespace e2e-tests-kubectl-s9chd deletion completed in 24.36621393s • [SLOW TEST:64.289 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:32:37.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
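For context, the Container Lifecycle Hook tests being set up above create a pod carrying a postStart hook. A minimal sketch of such a pod follows; the pod name matches the log, but the image, commands, and hook body are illustrative assumptions, not values taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name as it appears in the log below
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # assumption: any image with a shell
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran"]  # illustrative hook body
```

The postStart handler runs immediately after the container is created; the kubelet does not mark the container ready until the hook completes, which is what the "check poststart hook" step verifies.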
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 19 11:35:41.773: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:41.817: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:43.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:43.847: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:45.817: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:45.837: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:47.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:47.874: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:49.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:49.854: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:51.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:51.839: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:53.817: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:53.843: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:55.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:55.841: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:57.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:57.845: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:35:59.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:35:59.850: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:01.817: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:01.844: INFO: Pod 
pod-with-poststart-exec-hook still exists Dec 19 11:36:03.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:03.839: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:05.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:05.852: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:07.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:07.840: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:09.817: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:09.837: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:11.817: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:11.832: INFO: Pod pod-with-poststart-exec-hook still exists Dec 19 11:36:13.818: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 19 11:36:13.845: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:36:13.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jqx2h" for this suite. 
Dec 19 11:36:37.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:36:38.232: INFO: namespace: e2e-tests-container-lifecycle-hook-jqx2h, resource: bindings, ignored listing per whitelist Dec 19 11:36:38.240: INFO: namespace e2e-tests-container-lifecycle-hook-jqx2h deletion completed in 24.377838983s • [SLOW TEST:240.400 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:36:38.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:36:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-lsz7j" for this suite. Dec 19 11:36:56.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:36:56.911: INFO: namespace: e2e-tests-kubelet-test-lsz7j, resource: bindings, ignored listing per whitelist Dec 19 11:36:56.997: INFO: namespace e2e-tests-kubelet-test-lsz7j deletion completed in 6.403392215s • [SLOW TEST:18.756 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:36:56.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
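The prestop variant being set up here is the mirror image of the poststart case: the hook fires when the pod is deleted, before the container receives SIGTERM. A minimal sketch (pod name as logged; image, commands, and hook body are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook     # name as it appears in the log below
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                     # assumption: any image with a shell
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran"]  # illustrative hook body
```

Because the preStop handler runs inside the terminating container, the test deletes the pod first and only then performs the "check prestop hook" step, which is why that STEP appears after the deletion polling below.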
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 19 11:37:19.445: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:19.494: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:21.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:21.531: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:23.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:23.514: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:25.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:25.513: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:27.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:27.509: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:29.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:29.510: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:31.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:31.511: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:33.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:33.512: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:35.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:35.558: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:37.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:37.969: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:39.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:39.889: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:41.495: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Dec 19 11:37:41.572: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:43.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:43.514: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:45.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:45.511: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:47.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:47.523: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:49.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:49.513: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:51.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:51.520: INFO: Pod pod-with-prestop-exec-hook still exists Dec 19 11:37:53.495: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 19 11:37:53.514: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:37:53.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kf5wl" for this suite. 
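The next spec in the run, the ConfigMap "optional updates" volume test, mounts ConfigMap references marked `optional: true` so the pod can start even when a referenced ConfigMap does not exist yet, then watches the mounted files as ConfigMaps are deleted, updated, and created. A hedged sketch of such a volume mount (all names and fields here are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-check           # illustrative name
spec:
  containers:
  - name: cm-volume-test
    image: busybox                     # assumption: any image with a shell
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-create
      optional: true                   # pod starts even if the ConfigMap is absent
```

The kubelet syncs the projected files after the ConfigMap changes, which is why the test spends time in "waiting to observe update in volume" rather than asserting immediately.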
Dec 19 11:38:17.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:38:17.738: INFO: namespace: e2e-tests-container-lifecycle-hook-kf5wl, resource: bindings, ignored listing per whitelist Dec 19 11:38:17.795: INFO: namespace e2e-tests-container-lifecycle-hook-kf5wl deletion completed in 24.229877612s • [SLOW TEST:80.799 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:38:17.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-0d61b6d6-2254-11ea-a3c6-0242ac110004 STEP: Creating configMap with name cm-test-opt-upd-0d61b965-2254-11ea-a3c6-0242ac110004 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0d61b6d6-2254-11ea-a3c6-0242ac110004 STEP: Updating configmap 
cm-test-opt-upd-0d61b965-2254-11ea-a3c6-0242ac110004 STEP: Creating configMap with name cm-test-opt-create-0d61ba47-2254-11ea-a3c6-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:40:06.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zxhg7" for this suite. Dec 19 11:40:30.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:40:30.935: INFO: namespace: e2e-tests-configmap-zxhg7, resource: bindings, ignored listing per whitelist Dec 19 11:40:30.938: INFO: namespace e2e-tests-configmap-zxhg7 deletion completed in 24.270055048s • [SLOW TEST:133.142 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:40:30.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Dec 19 11:40:31.182: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix464592060/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:40:31.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6cr8k" for this suite. Dec 19 11:40:37.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:40:37.563: INFO: namespace: e2e-tests-kubectl-6cr8k, resource: bindings, ignored listing per whitelist Dec 19 11:40:37.610: INFO: namespace e2e-tests-kubectl-6cr8k deletion completed in 6.327135176s • [SLOW TEST:6.672 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:40:37.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-ldlz STEP: Creating a pod to test atomic-volume-subpath Dec 19 11:40:38.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ldlz" in namespace "e2e-tests-subpath-tzrvf" to be "success or failure" Dec 19 11:40:38.064: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.208148ms Dec 19 11:40:40.078: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035694051s Dec 19 11:40:42.097: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054917391s Dec 19 11:40:44.139: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096098953s Dec 19 11:40:46.162: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119120627s Dec 19 11:40:48.196: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153935503s Dec 19 11:40:50.277: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.235016938s Dec 19 11:40:52.287: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.244717667s Dec 19 11:40:54.792: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.74986579s Dec 19 11:40:56.806: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.764032385s Dec 19 11:40:58.820: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 20.777093065s Dec 19 11:41:00.842: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 22.799229707s Dec 19 11:41:02.879: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 24.836136688s Dec 19 11:41:04.901: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 26.858108551s Dec 19 11:41:06.917: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 28.875040059s Dec 19 11:41:08.949: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 30.906138348s Dec 19 11:41:11.019: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 32.976053753s Dec 19 11:41:13.042: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Running", Reason="", readiness=false. Elapsed: 34.999300615s Dec 19 11:41:15.197: INFO: Pod "pod-subpath-test-downwardapi-ldlz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.155034536s STEP: Saw pod success Dec 19 11:41:15.198: INFO: Pod "pod-subpath-test-downwardapi-ldlz" satisfied condition "success or failure" Dec 19 11:41:15.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-ldlz container test-container-subpath-downwardapi-ldlz: STEP: delete the pod Dec 19 11:41:15.455: INFO: Waiting for pod pod-subpath-test-downwardapi-ldlz to disappear Dec 19 11:41:15.467: INFO: Pod pod-subpath-test-downwardapi-ldlz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ldlz Dec 19 11:41:15.468: INFO: Deleting pod "pod-subpath-test-downwardapi-ldlz" in namespace "e2e-tests-subpath-tzrvf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:41:15.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-tzrvf" for this suite. Dec 19 11:41:21.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:41:21.761: INFO: namespace: e2e-tests-subpath-tzrvf, resource: bindings, ignored listing per whitelist Dec 19 11:41:21.766: INFO: namespace e2e-tests-subpath-tzrvf deletion completed in 6.284456011s • [SLOW TEST:44.156 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:41:21.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 19 11:41:22.708: INFO: Waiting up to 5m0s for pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh" in namespace "e2e-tests-svcaccounts-8tmqf" to be "success or failure"
Dec 19 11:41:22.732: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 23.147159ms
Dec 19 11:41:24.742: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032921559s
Dec 19 11:41:26.760: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050865415s
Dec 19 11:41:28.779: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070140332s
Dec 19 11:41:30.850: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141167967s
Dec 19 11:41:32.903: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193870451s
Dec 19 11:41:35.041: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.332391527s
Dec 19 11:41:37.100: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.391264909s
Dec 19 11:41:39.414: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.704964586s
Dec 19 11:41:41.476: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Pending", Reason="", readiness=false. Elapsed: 18.767178908s
Dec 19 11:41:43.494: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.785347217s
STEP: Saw pod success
Dec 19 11:41:43.494: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh" satisfied condition "success or failure"
Dec 19 11:41:43.506: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh container token-test:
STEP: delete the pod
Dec 19 11:41:43.716: INFO: Waiting for pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh to disappear
Dec 19 11:41:43.738: INFO: Pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-mmxxh no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 19 11:41:43.761: INFO: Waiting up to 5m0s for pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl" in namespace "e2e-tests-svcaccounts-8tmqf" to be "success or failure"
Dec 19 11:41:43.919: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 157.485806ms
Dec 19 11:41:45.936: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174685521s
Dec 19 11:41:47.970: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209071206s
Dec 19 11:41:50.395: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634261558s
Dec 19 11:41:52.413: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65208928s
Dec 19 11:41:54.441: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.680297694s
Dec 19 11:41:56.929: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.168397053s
Dec 19 11:41:58.945: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.183739334s
Dec 19 11:42:01.003: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.242354639s
STEP: Saw pod success
Dec 19 11:42:01.004: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl" satisfied condition "success or failure"
Dec 19 11:42:01.018: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl container root-ca-test:
STEP: delete the pod
Dec 19 11:42:01.203: INFO: Waiting for pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl to disappear
Dec 19 11:42:01.217: INFO: Pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-khvzl no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 19 11:42:01.245: INFO: Waiting up to 5m0s for pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm" in namespace "e2e-tests-svcaccounts-8tmqf" to be "success or failure"
Dec 19 11:42:01.255: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.594946ms
Dec 19 11:42:03.647: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401955983s
Dec 19 11:42:05.676: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431418443s
Dec 19 11:42:08.412: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 7.166894783s
Dec 19 11:42:11.198: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.952865329s
Dec 19 11:42:13.213: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.96800127s
Dec 19 11:42:15.232: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.987075833s
Dec 19 11:42:17.255: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009869823s
Dec 19 11:42:19.292: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.047084048s
STEP: Saw pod success
Dec 19 11:42:19.292: INFO: Pod "pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm" satisfied condition "success or failure"
Dec 19 11:42:19.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm container namespace-test:
STEP: delete the pod
Dec 19 11:42:20.245: INFO: Waiting for pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm to disappear
Dec 19 11:42:20.285: INFO: Pod pod-service-account-7b143868-2254-11ea-a3c6-0242ac110004-zs6wm no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:42:20.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-8tmqf" for this suite.
Dec 19 11:42:28.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:42:28.672: INFO: namespace: e2e-tests-svcaccounts-8tmqf, resource: bindings, ignored listing per whitelist
Dec 19 11:42:28.769: INFO: namespace e2e-tests-svcaccounts-8tmqf deletion completed in 8.402107821s

• [SLOW TEST:67.002 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:42:28.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 19 11:42:29.289: INFO: Waiting up to 5m0s for pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004" in namespace "e2e-tests-containers-qwx6m" to be "success or failure"
Dec 19 11:42:29.334: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 45.275744ms
Dec 19 11:42:31.379: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09000543s
Dec 19 11:42:33.395: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106623655s
Dec 19 11:42:35.412: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123055744s
Dec 19 11:42:37.658: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369703286s
Dec 19 11:42:39.671: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.38237189s
Dec 19 11:42:41.683: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.394098445s
STEP: Saw pod success
Dec 19 11:42:41.683: INFO: Pod "client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:42:41.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004 container test-container:
STEP: delete the pod
Dec 19 11:42:42.445: INFO: Waiting for pod client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:42:42.457: INFO: Pod client-containers-a2d36e6c-2254-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:42:42.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qwx6m" for this suite.
Dec 19 11:42:48.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:42:48.638: INFO: namespace: e2e-tests-containers-qwx6m, resource: bindings, ignored listing per whitelist
Dec 19 11:42:48.711: INFO: namespace e2e-tests-containers-qwx6m deletion completed in 6.244656576s

• [SLOW TEST:19.942 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:42:48.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:42:48.980: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 19 11:42:49.138: INFO: Number of nodes with available pods: 0
Dec 19 11:42:49.139: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:50.163: INFO: Number of nodes with available pods: 0
Dec 19 11:42:50.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:51.161: INFO: Number of nodes with available pods: 0
Dec 19 11:42:51.161: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:52.192: INFO: Number of nodes with available pods: 0
Dec 19 11:42:52.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:53.234: INFO: Number of nodes with available pods: 0
Dec 19 11:42:53.234: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:54.164: INFO: Number of nodes with available pods: 0
Dec 19 11:42:54.164: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:55.182: INFO: Number of nodes with available pods: 0
Dec 19 11:42:55.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:56.183: INFO: Number of nodes with available pods: 0
Dec 19 11:42:56.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:57.159: INFO: Number of nodes with available pods: 0
Dec 19 11:42:57.159: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:58.180: INFO: Number of nodes with available pods: 0
Dec 19 11:42:58.180: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:42:59.185: INFO: Number of nodes with available pods: 1
Dec 19 11:42:59.185: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 19 11:42:59.317: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:00.344: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:01.340: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:02.371: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:03.978: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:04.445: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:05.339: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:06.438: INFO: Wrong image for pod: daemon-set-7q68w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 19 11:43:06.438: INFO: Pod daemon-set-7q68w is not available
Dec 19 11:43:07.459: INFO: Pod daemon-set-jscxc is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 19 11:43:07.471: INFO: Number of nodes with available pods: 0
Dec 19 11:43:07.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:08.911: INFO: Number of nodes with available pods: 0
Dec 19 11:43:08.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:09.524: INFO: Number of nodes with available pods: 0
Dec 19 11:43:09.524: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:10.511: INFO: Number of nodes with available pods: 0
Dec 19 11:43:10.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:11.551: INFO: Number of nodes with available pods: 0
Dec 19 11:43:11.551: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:13.672: INFO: Number of nodes with available pods: 0
Dec 19 11:43:13.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:14.518: INFO: Number of nodes with available pods: 0
Dec 19 11:43:14.518: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:15.492: INFO: Number of nodes with available pods: 0
Dec 19 11:43:15.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:43:16.528: INFO: Number of nodes with available pods: 1
Dec 19 11:43:16.528: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-l7s2h, will wait for the garbage collector to delete the pods
Dec 19 11:43:16.665: INFO: Deleting DaemonSet.extensions daemon-set took: 20.954646ms
Dec 19 11:43:16.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.642728ms
Dec 19 11:43:32.673: INFO: Number of nodes with available pods: 0
Dec 19 11:43:32.673: INFO: Number of running nodes: 0, number of available pods: 0
Dec 19 11:43:32.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-l7s2h/daemonsets","resourceVersion":"15340430"},"items":null}
Dec 19 11:43:32.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l7s2h/pods","resourceVersion":"15340430"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:43:32.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-l7s2h" for this suite.
Dec 19 11:43:40.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:43:40.871: INFO: namespace: e2e-tests-daemonsets-l7s2h, resource: bindings, ignored listing per whitelist
Dec 19 11:43:40.894: INFO: namespace e2e-tests-daemonsets-l7s2h deletion completed in 8.199388228s

• [SLOW TEST:52.182 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:43:40.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:43:41.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 19 11:43:41.259: INFO: stderr: ""
Dec 19 11:43:41.259: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 19 11:43:41.268: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:43:41.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kjnc8" for this suite.
Dec 19 11:43:47.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:43:47.576: INFO: namespace: e2e-tests-kubectl-kjnc8, resource: bindings, ignored listing per whitelist
Dec 19 11:43:47.592: INFO: namespace e2e-tests-kubectl-kjnc8 deletion completed in 6.306832958s

S [SKIPPING] [6.698 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 19 11:43:41.268: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:43:47.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d1a85424-2254-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 11:43:47.919: INFO: Waiting up to 5m0s for pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-wtfqp" to be "success or failure"
Dec 19 11:43:47.972: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 52.913671ms
Dec 19 11:43:50.123: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203087562s
Dec 19 11:43:52.159: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239766945s
Dec 19 11:43:55.547: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.627606232s
Dec 19 11:43:58.292: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.372070331s
Dec 19 11:44:00.333: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.413536604s
STEP: Saw pod success
Dec 19 11:44:00.333: INFO: Pod "pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:44:00.356: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 19 11:44:00.568: INFO: Waiting for pod pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:44:00.580: INFO: Pod pod-secrets-d1b3ecbc-2254-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:44:00.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wtfqp" for this suite.
Dec 19 11:44:06.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:44:06.926: INFO: namespace: e2e-tests-secrets-wtfqp, resource: bindings, ignored listing per whitelist
Dec 19 11:44:06.931: INFO: namespace e2e-tests-secrets-wtfqp deletion completed in 6.280537548s

• [SLOW TEST:19.338 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:44:06.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 11:44:07.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-rtgvr" to be "success or failure"
Dec 19 11:44:07.251: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 122.594899ms
Dec 19 11:44:09.270: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142578725s
Dec 19 11:44:11.287: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159389876s
Dec 19 11:44:13.993: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.865184394s
Dec 19 11:44:16.329: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.2011657s
Dec 19 11:44:18.347: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.218862516s
Dec 19 11:44:20.366: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.237803076s
STEP: Saw pod success
Dec 19 11:44:20.366: INFO: Pod "downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:44:20.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004 container client-container:
STEP: delete the pod
Dec 19 11:44:20.447: INFO: Waiting for pod downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:44:20.457: INFO: Pod downwardapi-volume-dd26b8eb-2254-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:44:20.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rtgvr" for this suite.
Dec 19 11:44:28.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:44:28.659: INFO: namespace: e2e-tests-downward-api-rtgvr, resource: bindings, ignored listing per whitelist
Dec 19 11:44:28.710: INFO: namespace e2e-tests-downward-api-rtgvr deletion completed in 8.23324054s

• [SLOW TEST:21.779 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:44:28.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ea2fe0c1-2254-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 11:44:29.026: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-69kmz" to be "success or failure"
Dec 19 11:44:29.043: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.094304ms
Dec 19 11:44:31.132: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105303499s
Dec 19 11:44:33.147: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120168163s
Dec 19 11:44:35.158: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13149655s
Dec 19 11:44:37.179: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152563824s
Dec 19 11:44:39.200: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.173216017s
Dec 19 11:44:41.228: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.201520555s
STEP: Saw pod success
Dec 19 11:44:41.228: INFO: Pod "pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:44:41.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 19 11:44:41.341: INFO: Waiting for pod pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:44:41.355: INFO: Pod pod-configmaps-ea310bd1-2254-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:44:41.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-69kmz" for this suite.
Dec 19 11:44:47.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:44:47.629: INFO: namespace: e2e-tests-configmap-69kmz, resource: bindings, ignored listing per whitelist Dec 19 11:44:47.637: INFO: namespace e2e-tests-configmap-69kmz deletion completed in 6.270750416s • [SLOW TEST:18.926 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:44:47.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 19 11:44:47.891: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340624,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 19 11:44:47.892: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340625,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 19 11:44:47.892: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340626,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 19 11:44:58.076: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340640,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 19 11:44:58.077: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340641,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 19 11:44:58.077: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rxspf,SelfLink:/api/v1/namespaces/e2e-tests-watch-rxspf/configmaps/e2e-watch-test-label-changed,UID:f568c77e-2254-11ea-a994-fa163e34d433,ResourceVersion:15340642,Generation:0,CreationTimestamp:2019-12-19 11:44:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:44:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rxspf" for this suite.
Dec 19 11:45:06.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:45:06.309: INFO: namespace: e2e-tests-watch-rxspf, resource: bindings, ignored listing per whitelist
Dec 19 11:45:06.420: INFO: namespace e2e-tests-watch-rxspf deletion completed in 8.33382125s
• [SLOW TEST:18.783 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:45:06.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 19 11:45:06.807: INFO: Waiting up to 5m0s for pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-t8slb" to be "success or failure"
Dec 19 11:45:06.816: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.381391ms
Dec 19 11:45:08.841: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034550379s
Dec 19 11:45:10.870: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06336003s
Dec 19 11:45:12.986: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178691487s
Dec 19 11:45:15.113: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306215904s
Dec 19 11:45:17.153: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.345883161s
STEP: Saw pod success
Dec 19 11:45:17.153: INFO: Pod "pod-00b888d6-2255-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:45:17.160: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-00b888d6-2255-11ea-a3c6-0242ac110004 container test-container:
STEP: delete the pod
Dec 19 11:45:17.424: INFO: Waiting for pod pod-00b888d6-2255-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:45:17.526: INFO: Pod pod-00b888d6-2255-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:45:17.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t8slb" for this suite.
Dec 19 11:45:23.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:45:23.731: INFO: namespace: e2e-tests-emptydir-t8slb, resource: bindings, ignored listing per whitelist
Dec 19 11:45:23.938: INFO: namespace e2e-tests-emptydir-t8slb deletion completed in 6.397431959s
• [SLOW TEST:17.518 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:45:23.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 19 11:45:25.035: INFO: created pod pod-service-account-defaultsa
Dec 19 11:45:25.035: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 19 11:45:25.254: INFO: created pod pod-service-account-mountsa
Dec 19 11:45:25.254: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 19 11:45:25.448: INFO: created pod pod-service-account-nomountsa
Dec 19 11:45:25.448: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 19 11:45:25.477: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 19 11:45:25.477: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 19 11:45:25.564: INFO: created pod pod-service-account-mountsa-mountspec
Dec 19 11:45:25.564: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 19 11:45:25.742: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 19 11:45:25.742: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 19 11:45:25.778: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 19 11:45:25.779: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 19 11:45:25.903: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 19 11:45:25.903: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 19 11:45:27.028: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 19 11:45:27.028: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:45:27.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-7d54f" for this suite.
Dec 19 11:45:57.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:45:57.911: INFO: namespace: e2e-tests-svcaccounts-7d54f, resource: bindings, ignored listing per whitelist
Dec 19 11:45:58.051: INFO: namespace e2e-tests-svcaccounts-7d54f deletion completed in 30.396089143s
• [SLOW TEST:34.112 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:45:58.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 19 11:46:10.653: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:46:37.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-5gzfh" for this suite.
Dec 19 11:46:44.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:46:44.191: INFO: namespace: e2e-tests-namespaces-5gzfh, resource: bindings, ignored listing per whitelist
Dec 19 11:46:44.212: INFO: namespace e2e-tests-namespaces-5gzfh deletion completed in 6.258143462s
STEP: Destroying namespace "e2e-tests-nsdeletetest-xgdwf" for this suite.
Dec 19 11:46:44.216: INFO: Namespace e2e-tests-nsdeletetest-xgdwf was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-d822z" for this suite.
Dec 19 11:46:50.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:46:50.314: INFO: namespace: e2e-tests-nsdeletetest-d822z, resource: bindings, ignored listing per whitelist
Dec 19 11:46:50.632: INFO: namespace e2e-tests-nsdeletetest-d822z deletion completed in 6.415733565s
• [SLOW TEST:52.581 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:46:50.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3ed85a38-2255-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 11:46:51.212: INFO: Waiting up to 5m0s for pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-hwkz8" to be "success or failure"
Dec 19 11:46:51.254: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 41.624572ms
Dec 19 11:46:53.271: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059478507s
Dec 19 11:46:55.299: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086531833s
Dec 19 11:46:57.656: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444281123s
Dec 19 11:46:59.711: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499384459s
Dec 19 11:47:01.725: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.513337214s
STEP: Saw pod success
Dec 19 11:47:01.725: INFO: Pod "pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:47:01.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 19 11:47:02.119: INFO: Waiting for pod pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:47:02.147: INFO: Pod pod-secrets-3ed9f490-2255-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:47:02.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hwkz8" for this suite.
Dec 19 11:47:08.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:47:08.950: INFO: namespace: e2e-tests-secrets-hwkz8, resource: bindings, ignored listing per whitelist
Dec 19 11:47:09.375: INFO: namespace e2e-tests-secrets-hwkz8 deletion completed in 7.186492428s
• [SLOW TEST:18.741 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:47:09.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-49e78dfd-2255-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 11:47:09.590: INFO: Waiting up to 5m0s for pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-jkn97" to be "success or failure"
Dec 19 11:47:09.600: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.692584ms
Dec 19 11:47:11.616: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025506516s
Dec 19 11:47:13.626: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035580011s
Dec 19 11:47:15.709: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118620465s
Dec 19 11:47:17.767: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176566668s
Dec 19 11:47:19.783: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192715604s
STEP: Saw pod success
Dec 19 11:47:19.783: INFO: Pod "pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:47:19.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 19 11:47:20.084: INFO: Waiting for pod pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:47:20.871: INFO: Pod pod-secrets-49e83325-2255-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:47:20.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jkn97" for this suite.
Dec 19 11:47:27.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:47:27.680: INFO: namespace: e2e-tests-secrets-jkn97, resource: bindings, ignored listing per whitelist
Dec 19 11:47:27.700: INFO: namespace e2e-tests-secrets-jkn97 deletion completed in 6.812612226s
• [SLOW TEST:18.324 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:47:27.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-54cfcd57-2255-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 11:47:27.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-fdkr7" to be "success or failure"
Dec 19 11:47:27.993: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 95.888228ms
Dec 19 11:47:30.078: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181467638s
Dec 19 11:47:32.111: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214021739s
Dec 19 11:47:34.129: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232834569s
Dec 19 11:47:36.145: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.248520732s
Dec 19 11:47:38.156: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.259031556s
Dec 19 11:47:40.655: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.758399483s
STEP: Saw pod success
Dec 19 11:47:40.655: INFO: Pod "pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:47:40.671: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 19 11:47:41.458: INFO: Waiting for pod pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:47:41.474: INFO: Pod pod-configmaps-54d17e00-2255-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:47:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fdkr7" for this suite.
Dec 19 11:47:47.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:47:47.770: INFO: namespace: e2e-tests-configmap-fdkr7, resource: bindings, ignored listing per whitelist
Dec 19 11:47:47.833: INFO: namespace e2e-tests-configmap-fdkr7 deletion completed in 6.342793149s
• [SLOW TEST:20.133 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:47:47.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:47:48.415: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"60f1d206-2255-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002321f12), BlockOwnerDeletion:(*bool)(0xc002321f13)}}
Dec 19 11:47:48.587: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"60e9d5fe-2255-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001cc6f62), BlockOwnerDeletion:(*bool)(0xc001cc6f63)}}
Dec 19 11:47:48.689: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"60eb26a3-2255-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d4c0aa), BlockOwnerDeletion:(*bool)(0xc001d4c0ab)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:47:53.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4dr8f" for this suite.
Dec 19 11:47:59.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:48:00.006: INFO: namespace: e2e-tests-gc-4dr8f, resource: bindings, ignored listing per whitelist
Dec 19 11:48:00.114: INFO: namespace e2e-tests-gc-4dr8f deletion completed in 6.27425112s
• [SLOW TEST:12.280 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:48:00.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 19 11:48:00.689: INFO: Number of nodes with available pods: 0
Dec 19 11:48:00.689: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:02.263: INFO: Number of nodes with available pods: 0
Dec 19 11:48:02.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:02.772: INFO: Number of nodes with available pods: 0
Dec 19 11:48:02.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:03.714: INFO: Number of nodes with available pods: 0
Dec 19 11:48:03.714: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:04.709: INFO: Number of nodes with available pods: 0
Dec 19 11:48:04.709: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:06.256: INFO: Number of nodes with available pods: 0
Dec 19 11:48:06.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:07.004: INFO: Number of nodes with available pods: 0
Dec 19 11:48:07.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:07.716: INFO: Number of nodes with available pods: 0
Dec 19 11:48:07.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:08.701: INFO: Number of nodes with available pods: 0
Dec 19 11:48:08.701: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:09.818: INFO: Number of nodes with available pods: 1
Dec 19 11:48:09.818: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 19 11:48:09.875: INFO: Number of nodes with available pods: 0
Dec 19 11:48:09.875: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:10.905: INFO: Number of nodes with available pods: 0
Dec 19 11:48:10.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:11.955: INFO: Number of nodes with available pods: 0
Dec 19 11:48:11.955: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:12.899: INFO: Number of nodes with available pods: 0
Dec 19 11:48:12.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:13.917: INFO: Number of nodes with available pods: 0
Dec 19 11:48:13.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:14.910: INFO: Number of nodes with available pods: 0
Dec 19 11:48:14.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:15.908: INFO: Number of nodes with available pods: 0
Dec 19 11:48:15.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:16.908: INFO: Number of nodes with available pods: 0
Dec 19 11:48:16.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:17.919: INFO: Number of nodes with available pods: 0
Dec 19 11:48:17.919: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:18.923: INFO: Number of nodes with available pods: 0
Dec 19 11:48:18.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:19.937: INFO: Number of nodes with available pods: 0
Dec 19 11:48:19.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:20.906: INFO: Number of nodes with available pods: 0
Dec 19 11:48:20.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:22.484: INFO: Number of nodes with available pods: 0
Dec 19 11:48:22.484: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:23.245: INFO: Number of nodes with available pods: 0
Dec 19 11:48:23.245: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:24.207: INFO: Number of nodes with available pods: 0
Dec 19 11:48:24.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:24.957: INFO: Number of nodes with available pods: 0
Dec 19 11:48:24.957: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:25.911: INFO: Number of nodes with available pods: 0
Dec 19 11:48:25.911: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 11:48:26.896: INFO: Number of nodes with available pods: 1
Dec 19 11:48:26.896: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bzlgq, will wait for the garbage collector to delete the pods
Dec 19 11:48:26.995: INFO: Deleting DaemonSet.extensions daemon-set took: 41.026799ms
Dec 19 11:48:27.195: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.497715ms
Dec 19 11:48:42.940: INFO: Number of nodes with available pods: 0
Dec 19 11:48:42.940: INFO: Number of running nodes: 0, number of available pods: 0
Dec 19 11:48:42.947: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bzlgq/daemonsets","resourceVersion":"15341238"},"items":null}
Dec 19 11:48:42.950: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bzlgq/pods","resourceVersion":"15341238"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:48:42.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bzlgq" for this suite.
Dec 19 11:48:48.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:48:49.009: INFO: namespace: e2e-tests-daemonsets-bzlgq, resource: bindings, ignored listing per whitelist
Dec 19 11:48:49.162: INFO: namespace e2e-tests-daemonsets-bzlgq deletion completed in 6.199829459s
• [SLOW TEST:49.048 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:48:49.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 19 11:48:49.489: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Dec 19 11:48:49.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:51.914: INFO: stderr: ""
Dec 19 11:48:51.914: INFO: stdout: "service/redis-slave created\n"
Dec 19 11:48:51.915: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Dec 19 11:48:51.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:52.658: INFO: stderr: ""
Dec 19 11:48:52.658: INFO: stdout: "service/redis-master created\n"
Dec 19 11:48:52.659: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 19 11:48:52.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:53.360: INFO: stderr: ""
Dec 19 11:48:53.360: INFO: stdout: "service/frontend created\n"
Dec 19 11:48:53.361: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Dec 19 11:48:53.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:53.902: INFO: stderr: ""
Dec 19 11:48:53.902: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 19 11:48:53.903: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 19 11:48:53.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:54.663: INFO: stderr: ""
Dec 19 11:48:54.663: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 19 11:48:54.664: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Dec 19 11:48:54.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:48:55.158: INFO: stderr: ""
Dec 19 11:48:55.158: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 19 11:48:55.158: INFO: Waiting for all frontend pods to be Running.
Dec 19 11:49:25.211: INFO: Waiting for frontend to serve content.
Dec 19 11:49:27.548: INFO: Trying to add a new entry to the guestbook.
Dec 19 11:49:27.606: INFO: Verifying that added entry can be retrieved.
Dec 19 11:49:28.403: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 19 11:49:33.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:33.823: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:33.823: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 19 11:49:33.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:34.297: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:34.297: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 19 11:49:34.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:34.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:34.585: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 19 11:49:34.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:34.746: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:34.746: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 19 11:49:34.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:35.322: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:35.322: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 19 11:49:35.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zxt9q'
Dec 19 11:49:35.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 11:49:35.502: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:49:35.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zxt9q" for this suite.
Dec 19 11:50:19.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:50:19.904: INFO: namespace: e2e-tests-kubectl-zxt9q, resource: bindings, ignored listing per whitelist
Dec 19 11:50:19.945: INFO: namespace e2e-tests-kubectl-zxt9q deletion completed in 44.364917228s
• [SLOW TEST:90.783 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:50:19.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running
these commands on wheezy:
for i in `seq 1 600`; do
check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5v9rn;
check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5v9rn;
check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc;
podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5v9rn.pod.cluster.local"}');
check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
check="$$(dig +notcp +noall +answer +search 207.80.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.80.207_udp@PTR;
check="$$(dig +tcp +noall +answer +search 207.80.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.80.207_tcp@PTR;
sleep 1; done
STEP: Running these commands on jessie:
for i in `seq 1 600`; do
check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5v9rn;
check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn;
check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5v9rn.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc;
check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc;
check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc;
podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5v9rn.pod.cluster.local"}');
check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
check="$$(dig +notcp +noall +answer +search 207.80.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.80.207_udp@PTR;
check="$$(dig +tcp +noall +answer +search 207.80.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.80.207_tcp@PTR;
sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 19 11:50:38.722: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.728: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.737: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5v9rn from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.753: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.761: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.778: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.784: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.952: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.974: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.983: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.989: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:38.994: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004)
Dec 19 11:50:39.001: INFO: Lookups using e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5v9rn jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn jessie_udp@dns-test-service.e2e-tests-dns-5v9rn.svc jessie_tcp@dns-test-service.e2e-tests-dns-5v9rn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5v9rn.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5v9rn.svc jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 19 11:50:44.361: INFO: DNS probes using e2e-tests-dns-5v9rn/dns-test-bba1d9a9-2255-11ea-a3c6-0242ac110004 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:50:44.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-5v9rn" for this suite.
Dec 19 11:50:53.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:50:53.206: INFO: namespace: e2e-tests-dns-5v9rn, resource: bindings, ignored listing per whitelist
Dec 19 11:50:53.284: INFO: namespace e2e-tests-dns-5v9rn deletion completed in 8.29147738s
• [SLOW TEST:33.338 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:50:53.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 19 11:50:53.554: INFO: Waiting up to 5m0s for pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004" in namespace "e2e-tests-var-expansion-bfbbr" to be "success or failure"
Dec 19 11:50:53.574: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.93351ms
Dec 19 11:50:55.990: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435809932s
Dec 19 11:50:58.008: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453284607s
Dec 19 11:51:00.021: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46670464s
Dec 19 11:51:02.040: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48575403s
Dec 19 11:51:04.066: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.511845019s
Dec 19 11:51:06.101: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.546922797s
STEP: Saw pod success
Dec 19 11:51:06.102: INFO: Pod "var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:51:06.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 19 11:51:06.451: INFO: Waiting for pod var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:51:06.470: INFO: Pod var-expansion-cf6531b5-2255-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:51:06.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bfbbr" for this suite.
Dec 19 11:51:13.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:51:13.920: INFO: namespace: e2e-tests-var-expansion-bfbbr, resource: bindings, ignored listing per whitelist
Dec 19 11:51:13.998: INFO: namespace e2e-tests-var-expansion-bfbbr deletion completed in 7.386692022s
• [SLOW TEST:20.714 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:51:13.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:51:24.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zrtdj" for this suite.
Dec 19 11:52:14.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:52:14.551: INFO: namespace: e2e-tests-kubelet-test-zrtdj, resource: bindings, ignored listing per whitelist
Dec 19 11:52:14.643: INFO: namespace e2e-tests-kubelet-test-zrtdj deletion completed in 50.273885725s
• [SLOW TEST:60.644 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:52:14.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 19 11:52:15.311: INFO: Waiting up to 5m0s for pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-m6nr9" to be "success or failure"
Dec 19 11:52:15.329: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.354606ms
Dec 19 11:52:17.467: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156375507s
Dec 19 11:52:19.497: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18574221s
Dec 19 11:52:21.892: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581153825s
Dec 19 11:52:23.924: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.613444067s
Dec 19 11:52:25.951: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.640107112s
STEP: Saw pod success
Dec 19 11:52:25.951: INFO: Pod "pod-00070ad1-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:52:25.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-00070ad1-2256-11ea-a3c6-0242ac110004 container test-container:
STEP: delete the pod
Dec 19 11:52:26.041: INFO: Waiting for pod pod-00070ad1-2256-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:52:26.051: INFO: Pod pod-00070ad1-2256-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:52:26.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m6nr9" for this suite.
Dec 19 11:52:32.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:52:32.302: INFO: namespace: e2e-tests-emptydir-m6nr9, resource: bindings, ignored listing per whitelist
Dec 19 11:52:32.359: INFO: namespace e2e-tests-emptydir-m6nr9 deletion completed in 6.230976553s
• [SLOW TEST:17.716 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:52:32.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-0a7750e1-2256-11ea-a3c6-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:52:46.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g2ph8" for this suite.
Dec 19 11:53:11.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:53:11.161: INFO: namespace: e2e-tests-configmap-g2ph8, resource: bindings, ignored listing per whitelist
Dec 19 11:53:11.252: INFO: namespace e2e-tests-configmap-g2ph8 deletion completed in 24.407871294s
• [SLOW TEST:38.892 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:53:11.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 11:53:12.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-xclbw'
Dec 19 11:53:12.403: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 19 11:53:12.403: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 19 11:53:12.430: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 19 11:53:12.588: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 19 11:53:12.611: INFO: scanned /root for discovery docs:
Dec 19 11:53:12.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-xclbw'
Dec 19 11:53:38.790: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 19 11:53:38.790: INFO: stdout: "Created e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb\nScaling up e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb from 0 to 1, scaling
down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. 
Dec 19 11:53:38.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xclbw' Dec 19 11:53:39.907: INFO: stderr: "" Dec 19 11:53:39.907: INFO: stdout: "e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb-dc2xf e2e-test-nginx-rc-62ttp " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 19 11:53:44.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xclbw' Dec 19 11:53:45.102: INFO: stderr: "" Dec 19 11:53:45.102: INFO: stdout: "e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb-dc2xf " Dec 19 11:53:45.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb-dc2xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xclbw' Dec 19 11:53:45.196: INFO: stderr: "" Dec 19 11:53:45.196: INFO: stdout: "true" Dec 19 11:53:45.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb-dc2xf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xclbw' Dec 19 11:53:45.325: INFO: stderr: "" Dec 19 11:53:45.325: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Dec 19 11:53:45.325: INFO: e2e-test-nginx-rc-323ebbd3be06ef2b1cfd276844f627bb-dc2xf is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Dec 19 11:53:45.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xclbw' Dec 19 11:53:45.535: INFO: stderr: "" Dec 19 11:53:45.535: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:53:45.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xclbw" for this suite. 
Dec 19 11:54:09.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:54:09.758: INFO: namespace: e2e-tests-kubectl-xclbw, resource: bindings, ignored listing per whitelist Dec 19 11:54:09.838: INFO: namespace e2e-tests-kubectl-xclbw deletion completed in 24.291363885s • [SLOW TEST:58.586 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:54:09.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 19 11:54:24.752: INFO: Successfully updated pod "pod-update-4491abc7-2256-11ea-a3c6-0242ac110004" STEP: verifying the updated pod is in kubernetes Dec 19 11:54:24.786: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:54:24.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-j2pr9" for this suite. Dec 19 11:54:48.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:54:49.220: INFO: namespace: e2e-tests-pods-j2pr9, resource: bindings, ignored listing per whitelist Dec 19 11:54:49.289: INFO: namespace e2e-tests-pods-j2pr9 deletion completed in 24.495198199s • [SLOW TEST:39.451 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:54:49.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:55:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "e2e-tests-kubelet-test-t6tcz" for this suite. Dec 19 11:55:43.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:55:43.974: INFO: namespace: e2e-tests-kubelet-test-t6tcz, resource: bindings, ignored listing per whitelist Dec 19 11:55:44.176: INFO: namespace e2e-tests-kubelet-test-t6tcz deletion completed in 42.471435599s • [SLOW TEST:54.886 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:55:44.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7cc77165-2256-11ea-a3c6-0242ac110004 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7cc77165-2256-11ea-a3c6-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:55:58.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l8f5n" for this suite. Dec 19 11:56:22.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:56:23.091: INFO: namespace: e2e-tests-projected-l8f5n, resource: bindings, ignored listing per whitelist Dec 19 11:56:23.155: INFO: namespace e2e-tests-projected-l8f5n deletion completed in 24.20843234s • [SLOW TEST:38.978 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:56:23.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-93fdaa77-2256-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume secrets Dec 19 11:56:23.469: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004" 
in namespace "e2e-tests-projected-4dfjm" to be "success or failure" Dec 19 11:56:23.484: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.366808ms Dec 19 11:56:25.704: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234700143s Dec 19 11:56:27.717: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248266109s Dec 19 11:56:29.998: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528612677s Dec 19 11:56:32.132: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662606476s Dec 19 11:56:34.152: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.683245298s STEP: Saw pod success Dec 19 11:56:34.153: INFO: Pod "pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:56:34.161: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 19 11:56:34.954: INFO: Waiting for pod pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004 to disappear Dec 19 11:56:35.073: INFO: Pod pod-projected-secrets-94018fa7-2256-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:56:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4dfjm" for this suite. 
Dec 19 11:56:41.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:56:41.264: INFO: namespace: e2e-tests-projected-4dfjm, resource: bindings, ignored listing per whitelist Dec 19 11:56:41.301: INFO: namespace e2e-tests-projected-4dfjm deletion completed in 6.216984826s • [SLOW TEST:18.146 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:56:41.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 19 11:56:41.555: INFO: Waiting up to 5m0s for pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-fwmvh" to be "success or failure" Dec 19 11:56:41.563: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.797546ms Dec 19 11:56:43.596: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040748363s Dec 19 11:56:45.684: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128448368s Dec 19 11:56:48.032: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476051551s Dec 19 11:56:50.040: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.484482785s Dec 19 11:56:52.342: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.786738435s STEP: Saw pod success Dec 19 11:56:52.342: INFO: Pod "pod-9ec5f216-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:56:52.353: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9ec5f216-2256-11ea-a3c6-0242ac110004 container test-container: STEP: delete the pod Dec 19 11:56:53.270: INFO: Waiting for pod pod-9ec5f216-2256-11ea-a3c6-0242ac110004 to disappear Dec 19 11:56:53.294: INFO: Pod pod-9ec5f216-2256-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:56:53.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fwmvh" for this suite. 
Dec 19 11:56:59.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:56:59.593: INFO: namespace: e2e-tests-emptydir-fwmvh, resource: bindings, ignored listing per whitelist Dec 19 11:56:59.684: INFO: namespace e2e-tests-emptydir-fwmvh deletion completed in 6.380647456s • [SLOW TEST:18.383 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:56:59.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-a9c19265-2256-11ea-a3c6-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 19 11:56:59.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-jvhr4" to be "success or failure" Dec 19 11:56:59.955: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.13345ms Dec 19 11:57:02.025: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109446528s Dec 19 11:57:04.055: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139258501s Dec 19 11:57:06.382: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466888219s Dec 19 11:57:08.402: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486615022s Dec 19 11:57:10.427: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.511473774s Dec 19 11:57:12.445: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.529756628s STEP: Saw pod success Dec 19 11:57:12.445: INFO: Pod "pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:57:12.455: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 19 11:57:12.710: INFO: Waiting for pod pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004 to disappear Dec 19 11:57:12.738: INFO: Pod pod-projected-configmaps-a9c37ca5-2256-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:57:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jvhr4" for this suite. 
Dec 19 11:57:18.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:57:19.057: INFO: namespace: e2e-tests-projected-jvhr4, resource: bindings, ignored listing per whitelist Dec 19 11:57:19.260: INFO: namespace e2e-tests-projected-jvhr4 deletion completed in 6.505415385s • [SLOW TEST:19.575 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:57:19.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 19 11:57:41.806: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:41.912: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:43.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:44.028: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:45.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:45.946: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:47.913: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:47.934: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:49.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:49.933: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:51.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:51.931: INFO: Pod pod-with-prestop-http-hook still exists Dec 19 11:57:53.912: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 19 11:57:53.944: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:57:54.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mkvq9" for this suite. 
Dec 19 11:58:18.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 19 11:58:18.239: INFO: namespace: e2e-tests-container-lifecycle-hook-mkvq9, resource: bindings, ignored listing per whitelist Dec 19 11:58:18.244: INFO: namespace e2e-tests-container-lifecycle-hook-mkvq9 deletion completed in 24.224643037s • [SLOW TEST:58.983 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 19 11:58:18.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 19 11:58:18.707: INFO: 
Waiting up to 5m0s for pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-99tft" to be "success or failure" Dec 19 11:58:18.956: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 248.958254ms Dec 19 11:58:20.979: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271334536s Dec 19 11:58:22.999: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291601336s Dec 19 11:58:25.835: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.12820326s Dec 19 11:58:27.877: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.169508118s Dec 19 11:58:29.894: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.186309602s STEP: Saw pod success Dec 19 11:58:29.894: INFO: Pod "downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure" Dec 19 11:58:29.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004 container client-container: STEP: delete the pod Dec 19 11:58:30.794: INFO: Waiting for pod downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004 to disappear Dec 19 11:58:30.800: INFO: Pod downwardapi-volume-d89f553f-2256-11ea-a3c6-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 19 11:58:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-99tft" for this suite. 
Dec 19 11:58:36.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:58:37.118: INFO: namespace: e2e-tests-projected-99tft, resource: bindings, ignored listing per whitelist
Dec 19 11:58:37.153: INFO: namespace e2e-tests-projected-99tft deletion completed in 6.346855941s

• [SLOW TEST:18.907 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:58:37.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 19 11:58:37.470: INFO: Waiting up to 5m0s for pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004" in namespace "e2e-tests-containers-dd6px" to be "success or failure"
Dec 19 11:58:37.489: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.850297ms
Dec 19 11:58:39.509: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039674611s
Dec 19 11:58:41.530: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05977311s
Dec 19 11:58:44.481: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.01075785s
Dec 19 11:58:46.541: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.071583786s
Dec 19 11:58:48.584: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 11.114241842s
Dec 19 11:58:50.740: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.269993754s
STEP: Saw pod success
Dec 19 11:58:50.740: INFO: Pod "client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 11:58:50.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 11:58:50.934: INFO: Waiting for pod client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004 to disappear
Dec 19 11:58:50.963: INFO: Pod client-containers-e3e953bb-2256-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:58:50.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dd6px" for this suite.
Dec 19 11:58:57.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:58:57.201: INFO: namespace: e2e-tests-containers-dd6px, resource: bindings, ignored listing per whitelist
Dec 19 11:58:57.233: INFO: namespace e2e-tests-containers-dd6px deletion completed in 6.261782116s

• [SLOW TEST:20.080 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:58:57.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:59:04.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-ck29j" for this suite.
Dec 19 11:59:10.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:59:10.535: INFO: namespace: e2e-tests-namespaces-ck29j, resource: bindings, ignored listing per whitelist
Dec 19 11:59:10.561: INFO: namespace e2e-tests-namespaces-ck29j deletion completed in 6.422067253s
STEP: Destroying namespace "e2e-tests-nsdeletetest-prr59" for this suite.
Dec 19 11:59:10.570: INFO: Namespace e2e-tests-nsdeletetest-prr59 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-sn928" for this suite.
Dec 19 11:59:16.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:59:16.771: INFO: namespace: e2e-tests-nsdeletetest-sn928, resource: bindings, ignored listing per whitelist
Dec 19 11:59:16.857: INFO: namespace e2e-tests-nsdeletetest-sn928 deletion completed in 6.287039241s

• [SLOW TEST:19.623 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:59:16.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 11:59:17.111: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.315528ms)
Dec 19 11:59:17.163: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.723719ms)
Dec 19 11:59:17.173: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.677354ms)
Dec 19 11:59:17.180: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.565961ms)
Dec 19 11:59:17.188: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.261772ms)
Dec 19 11:59:17.195: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.866925ms)
Dec 19 11:59:17.200: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.471189ms)
Dec 19 11:59:17.206: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.342929ms)
Dec 19 11:59:17.211: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.923487ms)
Dec 19 11:59:17.217: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.839497ms)
Dec 19 11:59:17.222: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.179324ms)
Dec 19 11:59:17.228: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.932189ms)
Dec 19 11:59:17.233: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.216045ms)
Dec 19 11:59:17.237: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.677366ms)
Dec 19 11:59:17.244: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.501445ms)
Dec 19 11:59:17.249: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.242373ms)
Dec 19 11:59:17.254: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.18802ms)
Dec 19 11:59:17.261: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.601691ms)
Dec 19 11:59:17.267: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.568289ms)
Dec 19 11:59:17.274: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.21121ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 11:59:17.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-j99kp" for this suite.
Dec 19 11:59:23.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 11:59:23.431: INFO: namespace: e2e-tests-proxy-j99kp, resource: bindings, ignored listing per whitelist
Dec 19 11:59:23.504: INFO: namespace e2e-tests-proxy-j99kp deletion completed in 6.225103865s

• [SLOW TEST:6.646 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
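The twenty numbered requests above all hit the same path: the apiserver's node proxy subresource with the kubelet port (10250) given explicitly, forwarding to the kubelet's /logs/ endpoint. A minimal sketch of how such a path is assembled; buildNodeLogsProxyPath is an illustrative helper, not the framework's actual function:

```go
package main

import "fmt"

// buildNodeLogsProxyPath assembles the node proxy subresource path seen in the
// log above. The "<name>:<port>" form selects an explicit kubelet port rather
// than the node's default port. (Hypothetical helper for illustration.)
func buildNodeLogsProxyPath(nodeName string, kubeletPort int) string {
	return fmt.Sprintf("/api/v1/nodes/%s:%d/proxy/logs/", nodeName, kubeletPort)
}

func main() {
	// Matches the path requested twenty times in the test output.
	fmt.Println(buildNodeLogsProxyPath("hunter-server-hu5at5svl7ps", 10250))
}
```

The "(200; 13.315528ms)" suffix on each log line records the HTTP status and round-trip latency of one such proxied request.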
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 11:59:23.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 19 11:59:23.725: INFO: PodSpec: initContainers in spec.initContainers
Dec 19 12:00:30.326: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ff7ed0af-2256-11ea-a3c6-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-6lxg2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-6lxg2/pods/pod-init-ff7ed0af-2256-11ea-a3c6-0242ac110004", UID:"ff849f9b-2256-11ea-a994-fa163e34d433", ResourceVersion:"15342810", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712353563, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"725443579"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d9bvv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001da4680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d9bvv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d9bvv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d9bvv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fb9228), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00258c600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fb92a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fb92c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fb92c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fb92cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712353564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712353564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712353564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712353563, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc00206fe60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f8c690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f8c700)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ad73e2a0965b0216814637863de33af613831c9b2d049c3b97f9d5da11e16304"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00206fea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00206fe80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:00:30.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6lxg2" for this suite.
Dec 19 12:00:52.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:00:52.803: INFO: namespace: e2e-tests-init-container-6lxg2, resource: bindings, ignored listing per whitelist
Dec 19 12:00:52.829: INFO: namespace e2e-tests-init-container-6lxg2 deletion completed in 22.440832215s

• [SLOW TEST:89.324 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
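The pod dump above shows why the test takes 89 seconds: init1 runs /bin/false, so it terminates with RestartCount:3 while init2 stays Waiting and the app container run1 never starts. A toy model of that ordering rule, assuming nothing beyond what the log shows; this simulates the semantics, it is not kubelet code:

```go
package main

import "fmt"

// runInitContainers models the ordering the test verifies: init containers run
// one at a time, in declaration order, and app containers may only start once
// every init container has succeeded. With RestartPolicy "Always" a failing
// init container is retried, so later containers never run.
func runInitContainers(succeeds map[string]bool, order []string) (ran []string, appMayStart bool) {
	for _, name := range order {
		ran = append(ran, name)
		if !succeeds[name] {
			return ran, false // init2 and run1 in the log never get past this point
		}
	}
	return ran, true
}

func main() {
	// Mirrors the pod above: init1 (/bin/false) fails, init2 (/bin/true) would succeed.
	ran, ok := runInitContainers(map[string]bool{"init1": false, "init2": true}, []string{"init1", "init2"})
	fmt.Println(ran, ok)
}
```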
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:00:52.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-34ce9fb8-2257-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:00:53.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-xht2j" to be "success or failure"
Dec 19 12:00:53.290: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.460255ms
Dec 19 12:00:55.429: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156335015s
Dec 19 12:00:57.451: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178521323s
Dec 19 12:01:00.578: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.305054486s
Dec 19 12:01:02.611: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.338454146s
Dec 19 12:01:04.627: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.354603154s
STEP: Saw pod success
Dec 19 12:01:04.627: INFO: Pod "pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:01:04.632: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 19 12:01:04.751: INFO: Waiting for pod pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:01:04.807: INFO: Pod pod-configmaps-34d085a2-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:01:04.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xht2j" for this suite.
Dec 19 12:01:10.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:01:11.103: INFO: namespace: e2e-tests-configmap-xht2j, resource: bindings, ignored listing per whitelist
Dec 19 12:01:11.123: INFO: namespace e2e-tests-configmap-xht2j deletion completed in 6.308148331s

• [SLOW TEST:18.293 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:01:11.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 19 12:01:11.337: INFO: Waiting up to 5m0s for pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-dhfz8" to be "success or failure"
Dec 19 12:01:11.363: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.992128ms
Dec 19 12:01:13.689: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351773889s
Dec 19 12:01:15.711: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373619997s
Dec 19 12:01:17.737: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399041352s
Dec 19 12:01:19.758: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.420205405s
Dec 19 12:01:21.769: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431604858s
STEP: Saw pod success
Dec 19 12:01:21.769: INFO: Pod "pod-3f986d5d-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:01:21.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3f986d5d-2257-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:01:22.416: INFO: Waiting for pod pod-3f986d5d-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:01:22.747: INFO: Pod pod-3f986d5d-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:01:22.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dhfz8" for this suite.
Dec 19 12:01:28.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:01:29.043: INFO: namespace: e2e-tests-emptydir-dhfz8, resource: bindings, ignored listing per whitelist
Dec 19 12:01:29.066: INFO: namespace e2e-tests-emptydir-dhfz8 deletion completed in 6.308792634s

• [SLOW TEST:17.942 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
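The EmptyDir test names encode a (user, file mode, medium) triple: "(non-root,0666,tmpfs)" means the file is created with octal mode 0666 on a memory-backed emptyDir by a non-root container. The mode string is plain octal; a quick sketch of decoding it, with parseMode as an illustrative helper name:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// parseMode converts the octal mode string from the test name ("0666",
// "0644", …) into an os.FileMode, as one would when filling in a volume
// spec's permission bits.
func parseMode(s string) (os.FileMode, error) {
	n, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return 0, err
	}
	return os.FileMode(n), nil
}

func main() {
	m, err := parseMode("0666")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // os.FileMode renders permission bits in ls-style notation
}
```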
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:01:29.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 19 12:01:29.348: INFO: Waiting up to 5m0s for pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-2jrjs" to be "success or failure"
Dec 19 12:01:29.394: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 45.84845ms
Dec 19 12:01:31.690: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341981809s
Dec 19 12:01:33.726: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378074567s
Dec 19 12:01:36.166: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818459503s
Dec 19 12:01:38.184: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.8363456s
Dec 19 12:01:40.199: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.851267734s
Dec 19 12:01:42.683: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.335061235s
STEP: Saw pod success
Dec 19 12:01:42.683: INFO: Pod "pod-4a5de9d2-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:01:42.699: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4a5de9d2-2257-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:01:42.906: INFO: Waiting for pod pod-4a5de9d2-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:01:42.938: INFO: Pod pod-4a5de9d2-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:01:42.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2jrjs" for this suite.
Dec 19 12:01:49.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:01:49.182: INFO: namespace: e2e-tests-emptydir-2jrjs, resource: bindings, ignored listing per whitelist
Dec 19 12:01:49.307: INFO: namespace e2e-tests-emptydir-2jrjs deletion completed in 6.241846185s

• [SLOW TEST:20.241 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:01:49.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 19 12:01:49.531: INFO: Waiting up to 5m0s for pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-hdlbd" to be "success or failure"
Dec 19 12:01:49.546: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.769053ms
Dec 19 12:01:51.559: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027157114s
Dec 19 12:01:53.580: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048855465s
Dec 19 12:01:56.827: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.295454908s
Dec 19 12:01:58.911: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.379155713s
Dec 19 12:02:00.926: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.394933969s
STEP: Saw pod success
Dec 19 12:02:00.926: INFO: Pod "downward-api-5659e329-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:02:00.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5659e329-2257-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 12:02:02.226: INFO: Waiting for pod downward-api-5659e329-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:02:02.238: INFO: Pod downward-api-5659e329-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:02:02.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hdlbd" for this suite.
Dec 19 12:02:08.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:02:08.703: INFO: namespace: e2e-tests-downward-api-hdlbd, resource: bindings, ignored listing per whitelist
Dec 19 12:02:08.727: INFO: namespace e2e-tests-downward-api-hdlbd deletion completed in 6.417347923s

• [SLOW TEST:19.420 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
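For context, the spec above exercises the downward API's ability to expose a pod's own UID as an environment variable. A minimal sketch of such a pod (names and image are illustrative, not the exact manifest the suite generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # injected by the downward API at pod start
```

The test then reads the container's logs and checks that the printed UID matches the UID the API server assigned to the pod.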
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:02:08.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:02:09.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 19 12:02:10.033: INFO: stderr: ""
Dec 19 12:02:10.033: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:02:10.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rdsp6" for this suite.
Dec 19 12:02:16.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:02:16.189: INFO: namespace: e2e-tests-kubectl-rdsp6, resource: bindings, ignored listing per whitelist
Dec 19 12:02:16.290: INFO: namespace e2e-tests-kubectl-rdsp6 deletion completed in 6.2352061s

• [SLOW TEST:7.562 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:02:16.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nkmgf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 19 12:02:16.690: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 19 12:02:50.824: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nkmgf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 12:02:50.824: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 12:02:51.203: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:02:51.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nkmgf" for this suite.
Dec 19 12:03:15.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:03:15.416: INFO: namespace: e2e-tests-pod-network-test-nkmgf, resource: bindings, ignored listing per whitelist
Dec 19 12:03:15.423: INFO: namespace e2e-tests-pod-network-test-nkmgf deletion completed in 24.19501969s

• [SLOW TEST:59.133 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:03:15.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:03:15.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-c7jg9" to be "success or failure"
Dec 19 12:03:15.735: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 133.946917ms
Dec 19 12:03:17.967: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366095117s
Dec 19 12:03:20.031: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430616142s
Dec 19 12:03:22.044: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442769222s
Dec 19 12:03:24.057: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455904468s
Dec 19 12:03:26.071: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.470540867s
Dec 19 12:03:28.086: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.484884738s
Dec 19 12:03:30.107: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.506093135s
STEP: Saw pod success
Dec 19 12:03:30.107: INFO: Pod "downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:03:30.112: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:03:30.473: INFO: Waiting for pod downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:03:30.511: INFO: Pod downwardapi-volume-89b23c27-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:03:30.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c7jg9" for this suite.
Dec 19 12:03:38.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:03:38.778: INFO: namespace: e2e-tests-projected-c7jg9, resource: bindings, ignored listing per whitelist
Dec 19 12:03:38.988: INFO: namespace e2e-tests-projected-c7jg9 deletion completed in 8.458221273s

• [SLOW TEST:23.563 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
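The projected-downwardAPI spec above verifies that `defaultMode` on a projected volume is applied to the files it materializes. A rough sketch of the kind of pod involved (illustrative names; the suite's actual manifest may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400             # applied to every file the sources project
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The container lists the mounted files so the test can assert the expected mode bits from its logs.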
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:03:38.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 19 12:03:39.354: INFO: Waiting up to 5m0s for pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-containers-cc94r" to be "success or failure"
Dec 19 12:03:39.395: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 41.401552ms
Dec 19 12:03:41.817: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463354521s
Dec 19 12:03:43.935: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58084333s
Dec 19 12:03:46.093: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739623991s
Dec 19 12:03:48.105: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75080254s
Dec 19 12:03:50.118: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.764507997s
STEP: Saw pod success
Dec 19 12:03:50.118: INFO: Pod "client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:03:50.125: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:03:51.071: INFO: Waiting for pod client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:03:51.082: INFO: Pod client-containers-97d7e04a-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:03:51.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-cc94r" for this suite.
Dec 19 12:03:59.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:03:59.299: INFO: namespace: e2e-tests-containers-cc94r, resource: bindings, ignored listing per whitelist
Dec 19 12:03:59.309: INFO: namespace e2e-tests-containers-cc94r deletion completed in 8.208681998s

• [SLOW TEST:20.321 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
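The Docker Containers spec above relies on the pod-level `command` field, which replaces the image's ENTRYPOINT (while `args` would replace its CMD). A minimal illustrative pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # `command` overrides the image's default ENTRYPOINT entirely;
    # the test asserts the container ran this instead of the built-in command.
    command: ["/bin/echo", "entrypoint overridden"]
```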
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:03:59.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-cmnpp
Dec 19 12:04:11.590: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-cmnpp
STEP: checking the pod's current state and verifying that restartCount is present
Dec 19 12:04:11.594: INFO: Initial restart count of pod liveness-http is 0
Dec 19 12:04:33.910: INFO: Restart count of pod e2e-tests-container-probe-cmnpp/liveness-http is now 1 (22.316138449s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:04:33.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cmnpp" for this suite.
Dec 19 12:04:44.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:04:44.183: INFO: namespace: e2e-tests-container-probe-cmnpp, resource: bindings, ignored listing per whitelist
Dec 19 12:04:44.232: INFO: namespace e2e-tests-container-probe-cmnpp deletion completed in 10.148268338s

• [SLOW TEST:44.923 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
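The probing spec above creates a pod whose `/healthz` endpoint eventually fails, then watches `restartCount` climb (from 0 to 1 in the log). A sketch of such a pod, assuming an image that serves `/healthz` on port 8080 (the image name here is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo         # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness     # assumed image exposing /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      # Once the probe fails failureThreshold times in a row, the kubelet
      # restarts the container, incrementing status.restartCount.
```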
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:04:44.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 19 12:04:44.419: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 19 12:04:49.441: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:04:51.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-m97bz" for this suite.
Dec 19 12:05:02.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:05:03.658: INFO: namespace: e2e-tests-replication-controller-m97bz, resource: bindings, ignored listing per whitelist
Dec 19 12:05:03.658: INFO: namespace e2e-tests-replication-controller-m97bz deletion completed in 11.721424475s

• [SLOW TEST:19.425 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
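In the ReplicationController spec above, the suite edits a running pod's label so it no longer matches the controller's selector; the RC then "releases" the pod (drops its controller ownerReference) and creates a replacement. A rough sketch of the controller involved (illustrative, matching only the `pod-release` naming seen in the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release   # changing this label on a live pod orphans it
    spec:
      containers:
      - name: nginx
        image: nginx
```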
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:05:03.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 19 12:05:16.782: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ca57a960-2257-11ea-a3c6-0242ac110004"
Dec 19 12:05:16.782: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ca57a960-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-pods-vm9mr" to be "terminated due to deadline exceeded"
Dec 19 12:05:16.944: INFO: Pod "pod-update-activedeadlineseconds-ca57a960-2257-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 162.080744ms
Dec 19 12:05:19.091: INFO: Pod "pod-update-activedeadlineseconds-ca57a960-2257-11ea-a3c6-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.309042411s
Dec 19 12:05:19.091: INFO: Pod "pod-update-activedeadlineseconds-ca57a960-2257-11ea-a3c6-0242ac110004" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:05:19.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vm9mr" for this suite.
Dec 19 12:05:25.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:05:25.379: INFO: namespace: e2e-tests-pods-vm9mr, resource: bindings, ignored listing per whitelist
Dec 19 12:05:25.437: INFO: namespace e2e-tests-pods-vm9mr deletion completed in 6.321415622s

• [SLOW TEST:21.778 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
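The Pods spec above updates `activeDeadlineSeconds` on a running pod and waits for the kubelet to kill it, which is why the log shows the phase flip from `Running` to `Failed` with reason `DeadlineExceeded`. A minimal illustrative pod carrying the field from the start:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: activedeadline-demo   # illustrative name
spec:
  activeDeadlineSeconds: 5    # after 5s the pod is terminated and marked
                              # Failed with reason DeadlineExceeded
  containers:
  - name: pause
    image: busybox
    command: ["sleep", "3600"]
```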
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:05:25.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1219 12:06:07.110510       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 12:06:07.110: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:06:07.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-84zrm" for this suite.
Dec 19 12:06:15.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:06:17.616: INFO: namespace: e2e-tests-gc-84zrm, resource: bindings, ignored listing per whitelist
Dec 19 12:06:18.597: INFO: namespace e2e-tests-gc-84zrm deletion completed in 11.47579035s

• [SLOW TEST:53.160 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:06:18.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:06:19.422: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.054019ms)
Dec 19 12:06:19.439: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.310247ms)
Dec 19 12:06:19.450: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.401ms)
Dec 19 12:06:19.475: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 24.886358ms)
Dec 19 12:06:19.679: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 203.547359ms)
Dec 19 12:06:19.702: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.066759ms)
Dec 19 12:06:19.886: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 184.113054ms)
Dec 19 12:06:19.913: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.809699ms)
Dec 19 12:06:19.934: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.665034ms)
Dec 19 12:06:19.951: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.509896ms)
Dec 19 12:06:19.974: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.125079ms)
Dec 19 12:06:20.002: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.893893ms)
Dec 19 12:06:20.019: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.575613ms)
Dec 19 12:06:20.027: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.978155ms)
Dec 19 12:06:20.036: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.067271ms)
Dec 19 12:06:20.047: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.530919ms)
Dec 19 12:06:20.057: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.632479ms)
Dec 19 12:06:20.306: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 248.393785ms)
Dec 19 12:06:20.322: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.735589ms)
Dec 19 12:06:20.352: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 29.03746ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:06:20.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mwm4q" for this suite.
Dec 19 12:06:27.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:06:27.626: INFO: namespace: e2e-tests-proxy-mwm4q, resource: bindings, ignored listing per whitelist
Dec 19 12:06:27.929: INFO: namespace e2e-tests-proxy-mwm4q deletion completed in 7.566599517s

• [SLOW TEST:9.332 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:06:27.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:06:28.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-wqj9n" to be "success or failure"
Dec 19 12:06:29.528: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 577.885781ms
Dec 19 12:06:31.540: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.590464783s
Dec 19 12:06:33.718: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.768041961s
Dec 19 12:06:35.732: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782178156s
Dec 19 12:06:37.750: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.799882409s
Dec 19 12:06:39.770: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.820680127s
Dec 19 12:06:41.889: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.939085985s
Dec 19 12:06:43.961: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.010758321s
STEP: Saw pod success
Dec 19 12:06:43.961: INFO: Pod "downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:06:43.976: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:06:45.216: INFO: Waiting for pod downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:06:45.373: INFO: Pod downwardapi-volume-fcee4e1a-2257-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:06:45.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wqj9n" for this suite.
Dec 19 12:06:51.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:06:51.568: INFO: namespace: e2e-tests-downward-api-wqj9n, resource: bindings, ignored listing per whitelist
Dec 19 12:06:51.625: INFO: namespace e2e-tests-downward-api-wqj9n deletion completed in 6.241535888s

• [SLOW TEST:23.695 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
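The pod this spec creates is never echoed in the log. A minimal sketch of a downward API pod that sets a per-item `mode` on a projected file — names, image, and mode value are illustrative, not the suite's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]  # print the file's permission bits
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # per-item mode, the property this spec verifies
```

The spec succeeds once the container's output shows the requested permission bits on the projected file.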
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:06:51.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7ls7l
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 19 12:06:52.289: INFO: Found 0 stateful pods, waiting for 3
Dec 19 12:07:02.341: INFO: Found 1 stateful pods, waiting for 3
Dec 19 12:07:12.302: INFO: Found 2 stateful pods, waiting for 3
Dec 19 12:07:22.378: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:07:22.378: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:07:22.378: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 19 12:07:32.311: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:07:32.311: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:07:32.311: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 19 12:07:32.373: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 19 12:07:42.663: INFO: Updating stateful set ss2
Dec 19 12:07:42.753: INFO: Waiting for Pod e2e-tests-statefulset-7ls7l/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 19 12:07:53.318: INFO: Found 2 stateful pods, waiting for 3
Dec 19 12:08:03.530: INFO: Found 2 stateful pods, waiting for 3
Dec 19 12:08:14.145: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:08:14.145: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:08:14.145: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 19 12:08:23.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:08:23.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:08:23.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 19 12:08:23.383: INFO: Updating stateful set ss2
Dec 19 12:08:23.407: INFO: Waiting for Pod e2e-tests-statefulset-7ls7l/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:08:33.453: INFO: Updating stateful set ss2
Dec 19 12:08:33.726: INFO: Waiting for StatefulSet e2e-tests-statefulset-7ls7l/ss2 to complete update
Dec 19 12:08:33.727: INFO: Waiting for Pod e2e-tests-statefulset-7ls7l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:08:43.949: INFO: Waiting for StatefulSet e2e-tests-statefulset-7ls7l/ss2 to complete update
Dec 19 12:08:43.949: INFO: Waiting for Pod e2e-tests-statefulset-7ls7l/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:08:54.196: INFO: Waiting for StatefulSet e2e-tests-statefulset-7ls7l/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 19 12:09:03.779: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7ls7l
Dec 19 12:09:03.789: INFO: Scaling statefulset ss2 to 0
Dec 19 12:09:33.982: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:09:33.990: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:09:34.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7ls7l" for this suite.
Dec 19 12:09:42.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:09:42.493: INFO: namespace: e2e-tests-statefulset-7ls7l, resource: bindings, ignored listing per whitelist
Dec 19 12:09:42.596: INFO: namespace e2e-tests-statefulset-7ls7l deletion completed in 8.389431678s

• [SLOW TEST:170.971 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
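The canary and phased rollout exercised above are driven by the `partition` field of the StatefulSet RollingUpdate strategy. A hedged sketch of the shape of `ss2` — service name and labels are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test           # headless service created by the suite
  selector:
    matchLabels: {app: ss2}   # label is an assumption
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only ordinals >= 2 get the new template (the canary)
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

With `partition: 2`, only `ss2-2` is updated; lowering the partition to 1 and then 0 phases the update across `ss2-1` and `ss2-0`, which matches the per-pod revision waits in the log above.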
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:09:42.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 19 12:09:42.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:45.156: INFO: stderr: ""
Dec 19 12:09:45.156: INFO: stdout: "pod/pause created\n"
Dec 19 12:09:45.156: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 19 12:09:45.156: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-zjztw" to be "running and ready"
Dec 19 12:09:45.163: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.102669ms
Dec 19 12:09:47.852: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695868451s
Dec 19 12:09:49.892: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736351802s
Dec 19 12:09:52.131: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.974597925s
Dec 19 12:09:54.227: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.070593736s
Dec 19 12:09:56.237: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 11.081169395s
Dec 19 12:09:56.237: INFO: Pod "pause" satisfied condition "running and ready"
Dec 19 12:09:56.237: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 19 12:09:56.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:56.598: INFO: stderr: ""
Dec 19 12:09:56.598: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 19 12:09:56.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:56.701: INFO: stderr: ""
Dec 19 12:09:56.701: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 19 12:09:56.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:56.811: INFO: stderr: ""
Dec 19 12:09:56.811: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 19 12:09:56.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:56.995: INFO: stderr: ""
Dec 19 12:09:56.995: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 19 12:09:56.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:57.223: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 19 12:09:57.223: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 19 12:09:57.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-zjztw'
Dec 19 12:09:57.629: INFO: stderr: "No resources found.\n"
Dec 19 12:09:57.629: INFO: stdout: ""
Dec 19 12:09:57.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-zjztw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 19 12:09:57.855: INFO: stderr: ""
Dec 19 12:09:57.855: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:09:57.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zjztw" for this suite.
Dec 19 12:10:06.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:10:06.038: INFO: namespace: e2e-tests-kubectl-zjztw, resource: bindings, ignored listing per whitelist
Dec 19 12:10:06.146: INFO: namespace e2e-tests-kubectl-zjztw deletion completed in 8.206230604s

• [SLOW TEST:23.550 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
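The `create -f -` call above pipes a manifest the log does not echo; a plausible minimal pause pod (the image tag is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause   # matched later by the -l name=pause cleanup queries
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image; any long-running container works
```

The subsequent `kubectl label pods pause testing-label=testing-label-value` and `kubectl label pods pause testing-label-` calls add and then remove the label, which the `-L testing-label` column in the `get pod` output confirms.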
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:10:06.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 19 12:10:06.433: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:10:24.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-84q2k" for this suite.
Dec 19 12:10:32.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:10:32.366: INFO: namespace: e2e-tests-init-container-84q2k, resource: bindings, ignored listing per whitelist
Dec 19 12:10:32.625: INFO: namespace e2e-tests-init-container-84q2k deletion completed in 8.369204577s

• [SLOW TEST:26.478 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
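The failing pod is only summarized as "PodSpec: initContainers in spec.initContainers". A sketch of the scenario being exercised — names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail   # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]   # exits non-zero, so init never completes
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]    # must never start if init fails
```

With `restartPolicy: Never`, a failed init container is not retried: the pod goes to phase `Failed` and the app container is never started, which is exactly what the spec asserts.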
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:10:32.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:11:37.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-2gwsb" for this suite.
Dec 19 12:11:46.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:11:46.404: INFO: namespace: e2e-tests-container-runtime-2gwsb, resource: bindings, ignored listing per whitelist
Dec 19 12:11:46.471: INFO: namespace e2e-tests-container-runtime-2gwsb deletion completed in 8.310711917s

• [SLOW TEST:73.845 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
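The three container names appear to encode their restart policies — `rpa` = Always, `rpof` = OnFailure, `rpn` = Never (a reading of the names, not stated in the log). One of the cases might be sketched as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpa
spec:
  restartPolicy: Always   # the sibling cases use OnFailure (rpof) and Never (rpn)
  containers:
  - name: terminate-cmd-rpa
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits promptly so RestartCount, Phase, Ready, and State can be observed
```

Each policy yields a different expected `RestartCount`, pod `Phase`, `Ready` condition, and container `State`, which are the four checks repeated per container above.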
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:11:46.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 19 12:11:56.902: INFO: Pod pod-hostip-ba6a0c74-2258-11ea-a3c6-0242ac110004 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:11:56.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5dqcp" for this suite.
Dec 19 12:12:23.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:12:23.141: INFO: namespace: e2e-tests-pods-5dqcp, resource: bindings, ignored listing per whitelist
Dec 19 12:12:23.183: INFO: namespace e2e-tests-pods-5dqcp deletion completed in 26.275216466s

• [SLOW TEST:36.712 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
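Any pod scheduled to a node gets `status.hostIP` populated, which is the field the test reads back (10.96.1.240 above). A minimal illustrative pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip   # hypothetical name
spec:
  containers:
  - name: test
    image: k8s.gcr.io/pause:3.1   # assumed image; any container that schedules will do
```

Outside the framework, the same value is visible with `kubectl get pod pod-hostip -o jsonpath='{.status.hostIP}'`.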
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:12:23.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 12:12:23.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-65gd7'
Dec 19 12:12:23.548: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 19 12:12:23.549: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 19 12:12:27.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-65gd7'
Dec 19 12:12:28.134: INFO: stderr: ""
Dec 19 12:12:28.134: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:12:28.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-65gd7" for this suite.
Dec 19 12:12:34.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:12:34.356: INFO: namespace: e2e-tests-kubectl-65gd7, resource: bindings, ignored listing per whitelist
Dec 19 12:12:34.380: INFO: namespace e2e-tests-kubectl-65gd7 deletion completed in 6.201305723s

• [SLOW TEST:11.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
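The deprecation warning in the output suggests replacing the generator with an explicit object. A roughly equivalent manifest — using `apps/v1` rather than the `extensions` group the log shows, and with an assumed `run` label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: {run: e2e-test-nginx-deployment}   # label key is an assumption
  template:
    metadata:
      labels: {run: e2e-test-nginx-deployment}
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` produces much the same object without the deprecated `--generator` flag.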
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:12:34.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:12:34.661: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:12:35.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-frh78" for this suite.
Dec 19 12:12:41.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:12:42.053: INFO: namespace: e2e-tests-custom-resource-definition-frh78, resource: bindings, ignored listing per whitelist
Dec 19 12:12:42.122: INFO: namespace e2e-tests-custom-resource-definition-frh78 deletion completed in 6.217587417s

• [SLOW TEST:7.742 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
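The CRD itself is created through the Go client and never echoed. A sketch of a `v1beta1` definition of the sort a v1.13 apiserver accepts — group, names, and scope are invented for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # the pre-GA API, matching this cluster version
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```

The spec only asserts that creating and then deleting such a definition round-trips cleanly through the API.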
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:12:42.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-db8ca690-2258-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:12:42.523: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-kfktf" to be "success or failure"
Dec 19 12:12:42.570: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 47.498967ms
Dec 19 12:12:44.769: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246266993s
Dec 19 12:12:46.793: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269963294s
Dec 19 12:12:49.104: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581357567s
Dec 19 12:12:51.675: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.152246008s
Dec 19 12:12:53.690: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.166984979s
Dec 19 12:12:55.702: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.179292045s
STEP: Saw pod success
Dec 19 12:12:55.702: INFO: Pod "pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:12:55.707: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 19 12:12:56.455: INFO: Waiting for pod pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:12:56.645: INFO: Pod pod-projected-secrets-db92227b-2258-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:12:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kfktf" for this suite.
Dec 19 12:13:04.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:13:04.973: INFO: namespace: e2e-tests-projected-kfktf, resource: bindings, ignored listing per whitelist
Dec 19 12:13:05.003: INFO: namespace e2e-tests-projected-kfktf deletion completed in 8.337582832s

• [SLOW TEST:22.880 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
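A sketch of a pod consuming a secret through a `projected` volume, as this spec does — secret name, key, and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected/secret-data"]   # read back the projected key
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test   # created by the suite beforehand
          items:
          - key: secret-data
            path: secret-data
```

The pod succeeds once the container prints the secret's value, confirming the projection into the volume.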
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:13:05.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:14:05.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tln9m" for this suite.
Dec 19 12:14:29.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:14:29.307: INFO: namespace: e2e-tests-container-probe-tln9m, resource: bindings, ignored listing per whitelist
Dec 19 12:14:29.386: INFO: namespace e2e-tests-container-probe-tln9m deletion completed in 24.222459349s

• [SLOW TEST:84.383 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:14:29.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:14:57.785: INFO: Container started at 2019-12-19 12:14:41 +0000 UTC, pod became ready at 2019-12-19 12:14:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:14:57.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-b4lxr" for this suite.
Dec 19 12:15:21.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:15:21.923: INFO: namespace: e2e-tests-container-probe-b4lxr, resource: bindings, ignored listing per whitelist
Dec 19 12:15:22.016: INFO: namespace e2e-tests-container-probe-b4lxr deletion completed in 24.221115581s

• [SLOW TEST:52.629 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:15:22.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:15:22.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-v7ssx" to be "success or failure"
Dec 19 12:15:22.230: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.907702ms
Dec 19 12:15:24.361: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137782106s
Dec 19 12:15:26.377: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1540917s
Dec 19 12:15:28.961: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.738156662s
Dec 19 12:15:30.986: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762838799s
Dec 19 12:15:32.997: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.773717211s
STEP: Saw pod success
Dec 19 12:15:32.997: INFO: Pod "downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:15:33.004: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:15:34.230: INFO: Waiting for pod downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:15:34.480: INFO: Pod downwardapi-volume-3acc0c57-2259-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:15:34.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-v7ssx" for this suite.
Dec 19 12:15:40.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:15:40.778: INFO: namespace: e2e-tests-downward-api-v7ssx, resource: bindings, ignored listing per whitelist
Dec 19 12:15:41.088: INFO: namespace e2e-tests-downward-api-v7ssx deletion completed in 6.576183185s

• [SLOW TEST:19.072 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:15:41.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-84wb
STEP: Creating a pod to test atomic-volume-subpath
Dec 19 12:15:41.357: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-84wb" in namespace "e2e-tests-subpath-m9h2c" to be "success or failure"
Dec 19 12:15:41.494: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 136.688836ms
Dec 19 12:15:43.525: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167740924s
Dec 19 12:15:45.555: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197034301s
Dec 19 12:15:48.243: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884916948s
Dec 19 12:15:50.270: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.912087446s
Dec 19 12:15:52.292: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934228933s
Dec 19 12:15:54.398: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.040464691s
Dec 19 12:15:56.445: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.087399681s
Dec 19 12:15:58.679: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.321276968s
Dec 19 12:16:00.703: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 19.345316943s
Dec 19 12:16:02.716: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 21.357921743s
Dec 19 12:16:04.738: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 23.380074462s
Dec 19 12:16:06.755: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 25.39704998s
Dec 19 12:16:08.772: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 27.414748076s
Dec 19 12:16:10.792: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 29.434601038s
Dec 19 12:16:12.815: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 31.456945087s
Dec 19 12:16:14.833: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 33.474942995s
Dec 19 12:16:16.859: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Running", Reason="", readiness=false. Elapsed: 35.501009858s
Dec 19 12:16:18.873: INFO: Pod "pod-subpath-test-configmap-84wb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.515592578s
STEP: Saw pod success
Dec 19 12:16:18.873: INFO: Pod "pod-subpath-test-configmap-84wb" satisfied condition "success or failure"
Dec 19 12:16:18.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-84wb container test-container-subpath-configmap-84wb: 
STEP: delete the pod
Dec 19 12:16:19.111: INFO: Waiting for pod pod-subpath-test-configmap-84wb to disappear
Dec 19 12:16:19.135: INFO: Pod pod-subpath-test-configmap-84wb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-84wb
Dec 19 12:16:19.135: INFO: Deleting pod "pod-subpath-test-configmap-84wb" in namespace "e2e-tests-subpath-m9h2c"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:16:19.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-m9h2c" for this suite.
Dec 19 12:16:25.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:16:25.451: INFO: namespace: e2e-tests-subpath-m9h2c, resource: bindings, ignored listing per whitelist
Dec 19 12:16:25.501: INFO: namespace e2e-tests-subpath-m9h2c deletion completed in 6.340236149s

• [SLOW TEST:44.411 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:16:25.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ljsfk
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-ljsfk
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-ljsfk
Dec 19 12:16:25.900: INFO: Found 0 stateful pods, waiting for 1
Dec 19 12:16:35.929: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Dec 19 12:16:45.928: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 19 12:16:45.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:16:46.456: INFO: stderr: ""
Dec 19 12:16:46.457: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:16:46.457: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 19 12:16:46.490: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 19 12:16:56.541: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 19 12:16:56.541: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:16:56.706: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:16:56.706: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:16:56.706: INFO: 
Dec 19 12:16:56.706: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 19 12:16:57.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.932719236s
Dec 19 12:16:59.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.91591016s
Dec 19 12:17:00.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.781290847s
Dec 19 12:17:01.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.756257432s
Dec 19 12:17:02.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.71597517s
Dec 19 12:17:04.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.688864186s
Dec 19 12:17:06.638: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.530481359s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-ljsfk
Dec 19 12:17:07.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:08.814: INFO: stderr: ""
Dec 19 12:17:08.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 19 12:17:08.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 19 12:17:08.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:09.332: INFO: rc: 1
Dec 19 12:17:09.332: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00123c2a0 exit status 1   true [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe028 0xc000cfe040] [0x935700 0x935700] 0xc000a943c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 19 12:17:19.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:19.919: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 19 12:17:19.919: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 19 12:17:19.919: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 19 12:17:19.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:20.457: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 19 12:17:20.457: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 19 12:17:20.457: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 19 12:17:20.500: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:17:20.500: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:17:20.500: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 19 12:17:20.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:17:21.184: INFO: stderr: ""
Dec 19 12:17:21.184: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:17:21.184: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 19 12:17:21.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:17:21.588: INFO: stderr: ""
Dec 19 12:17:21.588: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:17:21.588: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 19 12:17:21.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:17:22.120: INFO: stderr: ""
Dec 19 12:17:22.120: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:17:22.120: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 19 12:17:22.120: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:17:22.163: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Dec 19 12:17:32.202: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 19 12:17:32.202: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 19 12:17:32.202: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 19 12:17:32.257: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:32.257: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:32.257: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:32.257: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:32.257: INFO: 
Dec 19 12:17:32.257: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:33.296: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:33.296: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:33.296: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:33.296: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:33.296: INFO: 
Dec 19 12:17:33.296: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:34.501: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:34.501: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:34.501: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:34.501: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:34.501: INFO: 
Dec 19 12:17:34.501: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:35.599: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:35.599: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:35.599: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:35.599: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:35.599: INFO: 
Dec 19 12:17:35.599: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:36.625: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:36.625: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:36.625: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:36.625: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:36.625: INFO: 
Dec 19 12:17:36.626: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:38.024: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:38.024: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:38.024: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:38.024: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:38.025: INFO: 
Dec 19 12:17:38.025: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:39.041: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:39.041: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:39.041: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:39.041: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:39.041: INFO: 
Dec 19 12:17:39.041: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:40.857: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:40.857: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:40.857: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:40.857: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:40.858: INFO: 
Dec 19 12:17:40.858: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 19 12:17:41.997: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 19 12:17:41.997: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:26 +0000 UTC  }]
Dec 19 12:17:41.997: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:41.997: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:17:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:16:56 +0000 UTC  }]
Dec 19 12:17:41.997: INFO: 
Dec 19 12:17:41.997: INFO: StatefulSet ss has not reached scale 0, at 3
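The repeated "StatefulSet ss has not reached scale 0, at 3" lines above come from a poll-until-target loop. A minimal sketch of that pattern (the callable names here are hypothetical stand-ins, not the e2e framework's actual Go code, which reads status.replicas from the API server):

```python
import time

def wait_for_scale(get_replicas, target, timeout=300.0, interval=1.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_replicas() until it returns target or timeout expires.

    Mirrors the polling loop behind the 'has not reached scale 0, at 3'
    log lines: read the current replica count, compare against the
    target, and sleep between attempts.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if get_replicas() == target:
            return True
        sleep(interval)
    return False
```

Injecting `get_replicas`, `clock`, and `sleep` keeps the loop testable without a cluster; the real framework uses the same shape with a Kubernetes client call inside.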
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-ljsfk
Dec 19 12:17:43.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:43.252: INFO: rc: 1
Dec 19 12:17:43.252: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0021f89f0 exit status 1   true [0xc000cfe1d8 0xc000cfe1f0 0xc000cfe208] [0xc000cfe1d8 0xc000cfe1f0 0xc000cfe208] [0xc000cfe1e8 0xc000cfe200] [0x935700 0x935700] 0xc001d08a80 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

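The failed `kubectl exec` above is retried every 10 seconds until it succeeds or the test framework gives up, producing the long run of near-identical "Waiting 10s to retry failed RunHostCmd" blocks that follows. A sketch of that retry pattern (hypothetical helper, assuming a command that returns an exit code, stdout, and stderr; not the framework's actual RunHostCmd implementation):

```python
import time

def run_cmd_with_retry(run_cmd, retries=32, delay=10.0, sleep=time.sleep):
    """Run a command, retrying failed attempts with a fixed delay.

    run_cmd returns (rc, stdout, stderr). On rc == 0 the stdout is
    returned; otherwise we wait `delay` seconds and retry, matching the
    'Waiting 10s to retry failed RunHostCmd' lines in the log.
    """
    err = ""
    for attempt in range(retries):
        rc, out, err = run_cmd()
        if rc == 0:
            return out
        if attempt < retries - 1:
            sleep(delay)
    raise RuntimeError(f"command still failing after {retries} attempts: {err}")
```

Note the two distinct failure modes in the log: the first attempt fails because the container is gone ("container not found"), later attempts because the pod itself has been deleted ("pods \"ss-0\" not found") as the scale-down proceeds.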
Dec 19 12:17:53.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:17:53.407: INFO: rc: 1
Dec 19 12:17:53.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0021f8b70 exit status 1   true [0xc000cfe210 0xc000cfe228 0xc000cfe240] [0xc000cfe210 0xc000cfe228 0xc000cfe240] [0xc000cfe220 0xc000cfe238] [0x935700 0x935700] 0xc001d08d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:03.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:03.566: INFO: rc: 1
Dec 19 12:18:03.566: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0ee40 exit status 1   true [0xc00161a0c8 0xc00161a108 0xc00161a158] [0xc00161a0c8 0xc00161a108 0xc00161a158] [0xc00161a0f8 0xc00161a130] [0x935700 0x935700] 0xc001e0d2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:13.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:13.722: INFO: rc: 1
Dec 19 12:18:13.722: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000b071d0 exit status 1   true [0xc000a2e0b0 0xc000a2e0c8 0xc000a2e0e0] [0xc000a2e0b0 0xc000a2e0c8 0xc000a2e0e0] [0xc000a2e0c0 0xc000a2e0d8] [0x935700 0x935700] 0xc00248b800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:23.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:23.894: INFO: rc: 1
Dec 19 12:18:23.895: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001669d10 exit status 1   true [0xc0015ce1e0 0xc0015ce1f8 0xc0015ce210] [0xc0015ce1e0 0xc0015ce1f8 0xc0015ce210] [0xc0015ce1f0 0xc0015ce208] [0x935700 0x935700] 0xc002308f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:33.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:34.018: INFO: rc: 1
Dec 19 12:18:34.018: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001669e30 exit status 1   true [0xc0015ce218 0xc0015ce230 0xc0015ce248] [0xc0015ce218 0xc0015ce230 0xc0015ce248] [0xc0015ce228 0xc0015ce240] [0x935700 0x935700] 0xc002309f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:44.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:44.114: INFO: rc: 1
Dec 19 12:18:44.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001669f80 exit status 1   true [0xc0015ce250 0xc0015ce268 0xc0015ce280] [0xc0015ce250 0xc0015ce268 0xc0015ce280] [0xc0015ce260 0xc0015ce278] [0x935700 0x935700] 0xc001bb0fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:18:54.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:18:54.251: INFO: rc: 1
Dec 19 12:18:54.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0f050 exit status 1   true [0xc00161a168 0xc00161a1c0 0xc00161a1f8] [0xc00161a168 0xc00161a1c0 0xc00161a1f8] [0xc00161a190 0xc00161a1e8] [0x935700 0x935700] 0xc001e0d5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:04.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:04.412: INFO: rc: 1
Dec 19 12:19:04.412: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0021f8d20 exit status 1   true [0xc000cfe248 0xc000cfe260 0xc000cfe278] [0xc000cfe248 0xc000cfe260 0xc000cfe278] [0xc000cfe258 0xc000cfe270] [0x935700 0x935700] 0xc001d09260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:14.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:14.830: INFO: rc: 1
Dec 19 12:19:14.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d5f0 exit status 1   true [0xc00161a000 0xc00161a030 0xc00161a070] [0xc00161a000 0xc00161a030 0xc00161a070] [0xc00161a020 0xc00161a058] [0x935700 0x935700] 0xc002308ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:24.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:25.011: INFO: rc: 1
Dec 19 12:19:25.012: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d770 exit status 1   true [0xc00161a088 0xc00161a0b8 0xc00161a0f8] [0xc00161a088 0xc00161a0b8 0xc00161a0f8] [0xc00161a0a8 0xc00161a0e0] [0x935700 0x935700] 0xc002309e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:35.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:35.174: INFO: rc: 1
Dec 19 12:19:35.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d8c0 exit status 1   true [0xc00161a108 0xc00161a158 0xc00161a190] [0xc00161a108 0xc00161a158 0xc00161a190] [0xc00161a130 0xc00161a180] [0x935700 0x935700] 0xc001d3cae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:45.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:45.272: INFO: rc: 1
Dec 19 12:19:45.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00123c1e0 exit status 1   true [0xc0015ce000 0xc0015ce018 0xc0015ce030] [0xc0015ce000 0xc0015ce018 0xc0015ce030] [0xc0015ce010 0xc0015ce028] [0x935700 0x935700] 0xc000a943c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:19:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:19:55.401: INFO: rc: 1
Dec 19 12:19:55.401: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000552d80 exit status 1   true [0xc000a2e000 0xc000a2e018 0xc000a2e030] [0xc000a2e000 0xc000a2e018 0xc000a2e030] [0xc000a2e010 0xc000a2e028] [0x935700 0x935700] 0xc001d08480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:05.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:05.551: INFO: rc: 1
Dec 19 12:20:05.552: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000552f00 exit status 1   true [0xc000a2e038 0xc000a2e050 0xc000a2e068] [0xc000a2e038 0xc000a2e050 0xc000a2e068] [0xc000a2e048 0xc000a2e060] [0x935700 0x935700] 0xc001d08900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:15.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:15.685: INFO: rc: 1
Dec 19 12:20:15.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000553050 exit status 1   true [0xc000a2e070 0xc000a2e088 0xc000a2e0a0] [0xc000a2e070 0xc000a2e088 0xc000a2e0a0] [0xc000a2e080 0xc000a2e098] [0x935700 0x935700] 0xc001d08c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:25.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:25.841: INFO: rc: 1
Dec 19 12:20:25.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020da10 exit status 1   true [0xc00161a1c0 0xc00161a1f8 0xc00161a230] [0xc00161a1c0 0xc00161a1f8 0xc00161a230] [0xc00161a1e8 0xc00161a220] [0x935700 0x935700] 0xc001d3ce40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:35.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:35.997: INFO: rc: 1
Dec 19 12:20:35.997: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0e330 exit status 1   true [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe028 0xc000cfe040] [0x935700 0x935700] 0xc001e0cde0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:45.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:46.106: INFO: rc: 1
Dec 19 12:20:46.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0e4b0 exit status 1   true [0xc000cfe050 0xc000cfe090 0xc000cfe0b8] [0xc000cfe050 0xc000cfe090 0xc000cfe0b8] [0xc000cfe088 0xc000cfe0b0] [0x935700 0x935700] 0xc001e0d0e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:20:56.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:20:56.218: INFO: rc: 1
Dec 19 12:20:56.218: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0e5d0 exit status 1   true [0xc000cfe0c0 0xc000cfe0d8 0xc000cfe0f0] [0xc000cfe0c0 0xc000cfe0d8 0xc000cfe0f0] [0xc000cfe0d0 0xc000cfe0e8] [0x935700 0x935700] 0xc001e0d3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:06.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:06.370: INFO: rc: 1
Dec 19 12:21:06.370: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0005531a0 exit status 1   true [0xc000a2e0a8 0xc000a2e0c0 0xc000a2e0d8] [0xc000a2e0a8 0xc000a2e0c0 0xc000a2e0d8] [0xc000a2e0b8 0xc000a2e0d0] [0x935700 0x935700] 0xc001d09080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:16.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:16.575: INFO: rc: 1
Dec 19 12:21:16.575: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d620 exit status 1   true [0xc00161a000 0xc00161a030 0xc00161a070] [0xc00161a000 0xc00161a030 0xc00161a070] [0xc00161a020 0xc00161a058] [0x935700 0x935700] 0xc002308ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:26.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:26.735: INFO: rc: 1
Dec 19 12:21:26.736: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000552db0 exit status 1   true [0xc000a2e000 0xc000a2e018 0xc000a2e030] [0xc000a2e000 0xc000a2e018 0xc000a2e030] [0xc000a2e010 0xc000a2e028] [0x935700 0x935700] 0xc001d3cb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:36.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:36.876: INFO: rc: 1
Dec 19 12:21:36.876: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000552ed0 exit status 1   true [0xc000a2e038 0xc000a2e050 0xc000a2e068] [0xc000a2e038 0xc000a2e050 0xc000a2e068] [0xc000a2e048 0xc000a2e060] [0x935700 0x935700] 0xc001d3cea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:46.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:46.992: INFO: rc: 1
Dec 19 12:21:46.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000553080 exit status 1   true [0xc000a2e070 0xc000a2e088 0xc000a2e0a0] [0xc000a2e070 0xc000a2e088 0xc000a2e0a0] [0xc000a2e080 0xc000a2e098] [0x935700 0x935700] 0xc001d3d560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:21:56.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:21:57.117: INFO: rc: 1
Dec 19 12:21:57.118: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0e360 exit status 1   true [0xc0015ce000 0xc0015ce018 0xc0015ce030] [0xc0015ce000 0xc0015ce018 0xc0015ce030] [0xc0015ce010 0xc0015ce028] [0x935700 0x935700] 0xc001d08480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:22:07.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:22:07.271: INFO: rc: 1
Dec 19 12:22:07.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d7d0 exit status 1   true [0xc00161a088 0xc00161a0b8 0xc00161a0f8] [0xc00161a088 0xc00161a0b8 0xc00161a0f8] [0xc00161a0a8 0xc00161a0e0] [0x935700 0x935700] 0xc002309e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:22:17.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:22:17.438: INFO: rc: 1
Dec 19 12:22:17.439: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00123c240 exit status 1   true [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe000 0xc000cfe030 0xc000cfe048] [0xc000cfe028 0xc000cfe040] [0x935700 0x935700] 0xc000a943c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:22:27.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:22:27.669: INFO: rc: 1
Dec 19 12:22:27.670: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00020d950 exit status 1   true [0xc00161a108 0xc00161a158 0xc00161a190] [0xc00161a108 0xc00161a158 0xc00161a190] [0xc00161a130 0xc00161a180] [0x935700 0x935700] 0xc001e0cd80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:22:37.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:22:37.763: INFO: rc: 1
Dec 19 12:22:37.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c0e510 exit status 1   true [0xc0015ce038 0xc0015ce050 0xc0015ce068] [0xc0015ce038 0xc0015ce050 0xc0015ce068] [0xc0015ce048 0xc0015ce060] [0x935700 0x935700] 0xc001d08900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 19 12:22:47.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ljsfk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:22:47.913: INFO: rc: 1
Dec 19 12:22:47.913: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 19 12:22:47.913: INFO: Scaling statefulset ss to 0
Dec 19 12:22:47.935: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 19 12:22:47.939: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ljsfk
Dec 19 12:22:47.943: INFO: Scaling statefulset ss to 0
Dec 19 12:22:47.961: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:22:47.967: INFO: Deleting statefulset ss
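The AfterEach cleanup logged above follows a fixed sequence: scale the StatefulSet to 0, wait for status.replicas to report 0, then delete the object. A sketch of that teardown order (the three callables are hypothetical injection points standing in for Kubernetes API calls, not the client library's real API):

```python
def tear_down_statefulset(scale, get_status_replicas, delete,
                          name="ss", sleep=lambda: None):
    """Scale a StatefulSet to 0, wait for its pods to drain, delete it.

    Deleting only after status.replicas reaches 0 matches the log order
    ('Scaling statefulset ss to 0' -> 'Waiting for statefulset
    status.replicas updated to 0' -> 'Deleting statefulset ss') and
    avoids orphaning pods mid-termination.
    """
    scale(name, 0)
    while get_status_replicas(name) != 0:
        sleep()
    delete(name)
```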
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:22:48.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ljsfk" for this suite.
Dec 19 12:22:56.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:22:56.154: INFO: namespace: e2e-tests-statefulset-ljsfk, resource: bindings, ignored listing per whitelist
Dec 19 12:22:56.369: INFO: namespace e2e-tests-statefulset-ljsfk deletion completed in 8.34731264s

• [SLOW TEST:390.868 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
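The teardown above scales the `ss` StatefulSet to zero replicas before deleting it. The e2e framework builds this object in Go rather than from a manifest, but as a rough sketch (replica count, service name, and labels are assumptions; the image matches the nginx image used elsewhere in this run), the object under test looks something like:

```yaml
# Illustrative sketch only: a minimal StatefulSet resembling the "ss"
# object exercised above. Field values are assumptions, not from the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
# Scaling to zero, as the teardown does, would be:
#   kubectl scale statefulset ss --replicas=0
```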
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:22:56.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 12:22:56.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vkvwp'
Dec 19 12:22:58.675: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 19 12:22:58.675: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 19 12:23:00.714: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-8npzt]
Dec 19 12:23:00.714: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-8npzt" in namespace "e2e-tests-kubectl-vkvwp" to be "running and ready"
Dec 19 12:23:00.722: INFO: Pod "e2e-test-nginx-rc-8npzt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.151408ms
Dec 19 12:23:02.751: INFO: Pod "e2e-test-nginx-rc-8npzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037108334s
Dec 19 12:23:05.153: INFO: Pod "e2e-test-nginx-rc-8npzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43822831s
Dec 19 12:23:07.171: INFO: Pod "e2e-test-nginx-rc-8npzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.457040776s
Dec 19 12:23:09.186: INFO: Pod "e2e-test-nginx-rc-8npzt": Phase="Running", Reason="", readiness=true. Elapsed: 8.472123453s
Dec 19 12:23:09.187: INFO: Pod "e2e-test-nginx-rc-8npzt" satisfied condition "running and ready"
Dec 19 12:23:09.187: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-8npzt]
Dec 19 12:23:09.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vkvwp'
Dec 19 12:23:09.409: INFO: stderr: ""
Dec 19 12:23:09.409: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 19 12:23:09.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vkvwp'
Dec 19 12:23:09.614: INFO: stderr: ""
Dec 19 12:23:09.614: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:23:09.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vkvwp" for this suite.
Dec 19 12:23:33.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:23:33.736: INFO: namespace: e2e-tests-kubectl-vkvwp, resource: bindings, ignored listing per whitelist
Dec 19 12:23:33.862: INFO: namespace e2e-tests-kubectl-vkvwp deletion completed in 24.238661329s

• [SLOW TEST:37.494 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
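The stderr captured above notes that `kubectl run --generator=run/v1` is deprecated. A hedged sketch of the ReplicationController that generator produces (the `run:` label selector follows kubectl's convention; exact defaults may differ by kubectl version):

```yaml
# Approximation of what `kubectl run e2e-test-nginx-rc
# --image=docker.io/library/nginx:1.14-alpine --generator=run/v1` creates.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
# The deprecation warning recommends `--generator=run-pod/v1` (a bare pod)
# or `kubectl create` with a manifest like this one instead.
```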
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:23:33.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-60061dde-225a-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:23:34.256: INFO: Waiting up to 5m0s for pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-wb7hn" to be "success or failure"
Dec 19 12:23:34.281: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.867285ms
Dec 19 12:23:36.739: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482288555s
Dec 19 12:23:38.759: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.502217832s
Dec 19 12:23:41.687: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.430097286s
Dec 19 12:23:43.713: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.456470304s
Dec 19 12:23:45.767: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.51073719s
STEP: Saw pod success
Dec 19 12:23:45.767: INFO: Pod "pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:23:45.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 19 12:23:46.051: INFO: Waiting for pod pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:23:46.139: INFO: Pod pod-configmaps-600f8ae5-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:23:46.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wb7hn" for this suite.
Dec 19 12:23:52.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:23:52.409: INFO: namespace: e2e-tests-configmap-wb7hn, resource: bindings, ignored listing per whitelist
Dec 19 12:23:52.424: INFO: namespace e2e-tests-configmap-wb7hn deletion completed in 6.256942111s

• [SLOW TEST:18.559 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
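The test above mounts one ConfigMap into two separate volumes of the same pod. A minimal sketch of that shape, assuming illustrative names (the real pod is constructed by the e2e framework in Go):

```yaml
# Sketch: the same ConfigMap consumed via two volumes in one pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume   # both volumes reference the same ConfigMap
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```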
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:23:52.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-6b170313-225a-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:23:52.810: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-jncdf" to be "success or failure"
Dec 19 12:23:52.841: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.04152ms
Dec 19 12:23:54.876: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065532816s
Dec 19 12:23:56.902: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092067825s
Dec 19 12:23:59.193: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382261997s
Dec 19 12:24:01.214: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.403674465s
Dec 19 12:24:03.222: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411408822s
STEP: Saw pod success
Dec 19 12:24:03.222: INFO: Pod "pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:24:03.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 19 12:24:03.326: INFO: Waiting for pod pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:24:03.337: INFO: Pod pod-configmaps-6b209e56-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:24:03.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jncdf" for this suite.
Dec 19 12:24:11.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:24:11.496: INFO: namespace: e2e-tests-configmap-jncdf, resource: bindings, ignored listing per whitelist
Dec 19 12:24:11.672: INFO: namespace e2e-tests-configmap-jncdf deletion completed in 8.327136589s

• [SLOW TEST:19.248 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
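The defaultMode test verifies that files projected from a ConfigMap volume carry the requested permission bits. Sketched as a manifest (names and mode value are assumptions; the property under test is `defaultMode` on the volume source):

```yaml
# Sketch: ConfigMap volume with defaultMode controlling file permissions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400   # every projected file becomes read-only for the owner
```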
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:24:11.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-n654j
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n654j to expose endpoints map[]
Dec 19 12:24:12.569: INFO: Get endpoints failed (21.68455ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 19 12:24:13.584: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n654j exposes endpoints map[] (1.03733089s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-n654j
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n654j to expose endpoints map[pod1:[100]]
Dec 19 12:24:18.566: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.922599644s elapsed, will retry)
Dec 19 12:24:24.508: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (10.864704741s elapsed, will retry)
Dec 19 12:24:25.527: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n654j exposes endpoints map[pod1:[100]] (11.884165649s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-n654j
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n654j to expose endpoints map[pod1:[100] pod2:[101]]
Dec 19 12:24:30.927: INFO: Unexpected endpoints: found map[778951d2-225a-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.372109555s elapsed, will retry)
Dec 19 12:24:38.062: INFO: Unexpected endpoints: found map[778951d2-225a-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (12.506638789s elapsed, will retry)
Dec 19 12:24:39.081: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n654j exposes endpoints map[pod2:[101] pod1:[100]] (13.526116384s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-n654j
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n654j to expose endpoints map[pod2:[101]]
Dec 19 12:24:40.146: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n654j exposes endpoints map[pod2:[101]] (1.058330879s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-n654j
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n654j to expose endpoints map[]
Dec 19 12:24:41.206: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n654j exposes endpoints map[] (1.014151612s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:24:42.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-n654j" for this suite.
Dec 19 12:25:06.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:25:07.015: INFO: namespace: e2e-tests-services-n654j, resource: bindings, ignored listing per whitelist
Dec 19 12:25:07.114: INFO: namespace e2e-tests-services-n654j deletion completed in 24.313777924s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:55.441 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
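The endpoints maps above (`map[pod1:[100] pod2:[101]]`) show one service port resolving to container port 100 on pod1 and another to port 101 on pod2. A sketch of such a multiport Service, with assumed names and front-end port numbers:

```yaml
# Sketch: a two-port Service whose endpoints land on different
# container ports, matching the endpoint maps logged above.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    test: multi-endpoint-test   # assumed label; both pods must carry it
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # pod1 serves on container port 100
  - name: portname2
    port: 81
    targetPort: 101   # pod2 serves on container port 101
```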
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:25:07.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:25:07.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-jf458" to be "success or failure"
Dec 19 12:25:07.296: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 57.86559ms
Dec 19 12:25:09.327: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088508335s
Dec 19 12:25:11.343: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104101957s
Dec 19 12:25:13.419: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180123809s
Dec 19 12:25:15.441: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20241678s
Dec 19 12:25:17.463: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224733308s
Dec 19 12:25:19.480: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.241479264s
STEP: Saw pod success
Dec 19 12:25:19.480: INFO: Pod "downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:25:19.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:25:19.991: INFO: Waiting for pod downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:25:20.354: INFO: Pod downwardapi-volume-977dd05e-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:25:20.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jf458" for this suite.
Dec 19 12:25:26.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:25:26.673: INFO: namespace: e2e-tests-downward-api-jf458, resource: bindings, ignored listing per whitelist
Dec 19 12:25:26.800: INFO: namespace e2e-tests-downward-api-jf458 deletion completed in 6.425263265s

• [SLOW TEST:19.687 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
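The cpu-request test exposes the container's own resource request through a downward API volume. A sketch under assumptions (request value, paths, and divisor are illustrative; the mechanism is `resourceFieldRef` with `requests.cpu`):

```yaml
# Sketch: downward API volume exposing the container's CPU request as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # value is written as an integer in these units
```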
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:25:26.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1219 12:25:30.222053       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 12:25:30.222: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:25:30.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pgk6k" for this suite.
Dec 19 12:25:37.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:25:37.210: INFO: namespace: e2e-tests-gc-pgk6k, resource: bindings, ignored listing per whitelist
Dec 19 12:25:37.252: INFO: namespace e2e-tests-gc-pgk6k deletion completed in 6.991074292s

• [SLOW TEST:10.451 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
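Garbage collection of the ReplicaSet works through the `ownerReferences` the Deployment controller stamps onto it: deleting the Deployment without orphaning lets the collector cascade to the RS and its pods. A sketch of that metadata (names and the UID are placeholders):

```yaml
# Sketch: ownership metadata that drives the cascading delete tested above.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-deployment-5c9f8   # hypothetical generated name
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: test-deployment
    uid: 00000000-0000-0000-0000-000000000000   # placeholder
    controller: true
    blockOwnerDeletion: true
```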
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:25:37.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:25:37.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-lbjhk" to be "success or failure"
Dec 19 12:25:37.569: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 42.432004ms
Dec 19 12:25:39.846: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31897391s
Dec 19 12:25:41.891: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364028919s
Dec 19 12:25:44.711: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.184212259s
Dec 19 12:25:46.726: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.199521021s
Dec 19 12:25:48.739: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.212331641s
Dec 19 12:25:51.181: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.654113662s
STEP: Saw pod success
Dec 19 12:25:51.181: INFO: Pod "downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:25:51.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:25:51.386: INFO: Waiting for pod downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:25:51.398: INFO: Pod downwardapi-volume-a98a08a8-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:25:51.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lbjhk" for this suite.
Dec 19 12:25:57.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:25:57.585: INFO: namespace: e2e-tests-projected-lbjhk, resource: bindings, ignored listing per whitelist
Dec 19 12:25:57.677: INFO: namespace e2e-tests-projected-lbjhk deletion completed in 6.265115841s

• [SLOW TEST:20.425 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
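The projected downwardAPI test sets a per-item `mode`, overriding the volume-wide default for a single file. Sketched with assumed names and a 0400 mode:

```yaml
# Sketch: projected volume with a per-item mode on one downward API file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # per-item mode, the property under test
```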
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:25:57.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:25:58.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-vglrj" to be "success or failure"
Dec 19 12:25:58.059: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.008508ms
Dec 19 12:26:00.081: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033264408s
Dec 19 12:26:02.094: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046472381s
Dec 19 12:26:04.572: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524512878s
Dec 19 12:26:06.639: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.591914106s
Dec 19 12:26:08.676: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.628824194s
STEP: Saw pod success
Dec 19 12:26:08.676: INFO: Pod "downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:26:08.683: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:26:09.036: INFO: Waiting for pod downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:26:09.046: INFO: Pod downwardapi-volume-b5b9f9ce-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:26:09.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vglrj" for this suite.
Dec 19 12:26:17.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:26:17.247: INFO: namespace: e2e-tests-downward-api-vglrj, resource: bindings, ignored listing per whitelist
Dec 19 12:26:17.304: INFO: namespace e2e-tests-downward-api-vglrj deletion completed in 8.249963355s

• [SLOW TEST:19.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:26:17.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 19 12:26:17.540: INFO: Waiting up to 5m0s for pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-var-expansion-zmqfq" to be "success or failure"
Dec 19 12:26:17.677: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 136.030545ms
Dec 19 12:26:19.694: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153669109s
Dec 19 12:26:21.738: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197230583s
Dec 19 12:26:23.785: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244587446s
Dec 19 12:26:25.833: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292806234s
Dec 19 12:26:27.857: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.316960796s
Dec 19 12:26:29.878: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.337593861s
STEP: Saw pod success
Dec 19 12:26:29.878: INFO: Pod "var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:26:29.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 12:26:29.975: INFO: Waiting for pod var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:26:29.988: INFO: Pod var-expansion-c165d1d8-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:26:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zmqfq" for this suite.
Dec 19 12:26:38.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:26:38.293: INFO: namespace: e2e-tests-var-expansion-zmqfq, resource: bindings, ignored listing per whitelist
Dec 19 12:26:38.327: INFO: namespace e2e-tests-var-expansion-zmqfq deletion completed in 8.333217187s

• [SLOW TEST:21.022 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
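The Variable Expansion test exercises Kubernetes' `$(VAR)` substitution in a container's `args`: references to names defined in the container's `env` list are replaced, `$$(VAR)` escapes to a literal `$(VAR)`, and unresolvable references are left untouched. A small sketch of that expansion rule, assuming an illustrative `POD_NAME` variable:

```python
import re

# Sketch of Kubernetes' $(VAR) expansion for container args, as exercised
# by the var-expansion test: defined vars are substituted, $$(VAR) escapes
# to a literal $(VAR), and undefined references pass through unchanged.
def expand(args, env):
    lookup = {e["name"]: e["value"] for e in env}

    def repl(m):
        if m.group(0).startswith("$$"):
            return m.group(0)[1:]               # $$(VAR) -> literal $(VAR)
        return lookup.get(m.group(1), m.group(0))  # unknown var stays as-is

    pattern = r"\$?\$\(([A-Za-z_][A-Za-z0-9_]*)\)"
    return [re.sub(pattern, repl, a) for a in args]

env = [{"name": "POD_NAME", "value": "dapi-test-pod"}]
print(expand(["echo", "$(POD_NAME)"], env))  # ['echo', 'dapi-test-pod']
print(expand(["$$(POD_NAME)"], env))         # ['$(POD_NAME)']
```

The conformance test verifies the first behavior by checking that the substituted value appears in the container's output.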
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:26:38.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 19 12:26:38.557: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 19 12:26:38.588: INFO: Waiting for terminating namespaces to be deleted...
Dec 19 12:26:38.595: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 19 12:26:38.619: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:26:38.619: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 19 12:26:38.619: INFO: 	Container weave ready: true, restart count 0
Dec 19 12:26:38.619: INFO: 	Container weave-npc ready: true, restart count 0
Dec 19 12:26:38.619: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 12:26:38.619: INFO: 	Container coredns ready: true, restart count 0
Dec 19 12:26:38.619: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:26:38.619: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:26:38.619: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:26:38.619: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 12:26:38.619: INFO: 	Container coredns ready: true, restart count 0
Dec 19 12:26:38.619: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 19 12:26:38.619: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 19 12:26:38.793: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ce126823-225a-11ea-a3c6-0242ac110004.15e1c568e26fadfb], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-jcq7f/filler-pod-ce126823-225a-11ea-a3c6-0242ac110004 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ce126823-225a-11ea-a3c6-0242ac110004.15e1c56a09788cf8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ce126823-225a-11ea-a3c6-0242ac110004.15e1c56b00c31bbf], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ce126823-225a-11ea-a3c6-0242ac110004.15e1c56b2f45297b], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e1c56bb1d5171b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:26:52.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jcq7f" for this suite.
Dec 19 12:26:58.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:26:59.014: INFO: namespace: e2e-tests-sched-pred-jcq7f, resource: bindings, ignored listing per whitelist
Dec 19 12:26:59.110: INFO: namespace e2e-tests-sched-pred-jcq7f deletion completed in 6.692567139s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.782 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
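The scheduler-predicates test above sums the CPU requests already on the node, starts a filler pod to consume most of the remaining capacity, then confirms that one more pod fails with `0/1 nodes are available: 1 Insufficient cpu.` The fit check itself reduces to a comparison in millicores; the per-pod requests below are taken from the log, while the node's allocatable CPU is an assumed example figure:

```python
# Sketch of the scheduler's CPU-fit predicate: a new pod fits only if the
# sum of existing requests plus its own request stays within allocatable.
def fits(allocatable_mcpu, requests_mcpu, new_request_mcpu):
    return sum(requests_mcpu) + new_request_mcpu <= allocatable_mcpu

# Per-pod CPU requests (millicores) reported in the log above:
# coredns x2, etcd, kube-apiserver, kube-controller-manager, kube-proxy,
# kube-scheduler, weave-net.
existing = [100, 100, 0, 250, 200, 0, 100, 20]
allocatable = 2000          # assumed 2-CPU node; not stated in the log

print(fits(allocatable, existing, 1000))  # filler pod that fits -> True
print(fits(allocatable, existing, 2000))  # "Insufficient cpu" case -> False
```

In the real test the filler pod's request is computed from the node's actual allocatable capacity so that the follow-up pod is guaranteed not to fit.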
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:26:59.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-daf96a2c-225a-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:27:00.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-wkpmk" to be "success or failure"
Dec 19 12:27:00.683: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 154.684356ms
Dec 19 12:27:03.241: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713076576s
Dec 19 12:27:05.256: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.727635798s
Dec 19 12:27:08.747: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21917186s
Dec 19 12:27:10.763: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.234692606s
Dec 19 12:27:12.793: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.264316978s
STEP: Saw pod success
Dec 19 12:27:12.793: INFO: Pod "pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:27:12.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 19 12:27:13.197: INFO: Waiting for pod pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:27:13.440: INFO: Pod pod-projected-configmaps-dafd401c-225a-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:27:13.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wkpmk" for this suite.
Dec 19 12:27:20.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:27:20.608: INFO: namespace: e2e-tests-projected-wkpmk, resource: bindings, ignored listing per whitelist
Dec 19 12:27:20.644: INFO: namespace e2e-tests-projected-wkpmk deletion completed in 7.190725997s

• [SLOW TEST:21.533 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
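The "with mappings" variant of the projected configMap test renames keys on the way into the volume: each `items` entry maps a configMap key to a file path inside the mount. A sketch of that volume definition, with an illustrative key/path pair (the generated configMap name in the log is reused, but the mapping itself is an assumption):

```python
# Sketch of a projected configMap volume with key-to-path mappings, the
# mechanism this test consumes from inside a pod.
def projected_configmap_volume(cm_name, mappings):
    """mappings: {configMap key: relative file path inside the mount}."""
    return {
        "name": "projected-configmap-volume",
        "projected": {
            "sources": [{
                "configMap": {
                    "name": cm_name,
                    "items": [{"key": k, "path": p}
                              for k, p in mappings.items()],
                },
            }],
        },
    }

vol = projected_configmap_volume(
    "projected-configmap-test-volume-map",
    {"data-1": "path/to/data-2"},
)
print(vol["projected"]["sources"][0]["configMap"]["items"])
```

With this mapping the container sees the value of key `data-1` at `<mountPath>/path/to/data-2` rather than at a file named after the key.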
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:27:20.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-f7p2g
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 19 12:27:20.977: INFO: Found 0 stateful pods, waiting for 3
Dec 19 12:27:31.008: INFO: Found 1 stateful pods, waiting for 3
Dec 19 12:27:40.994: INFO: Found 2 stateful pods, waiting for 3
Dec 19 12:27:51.000: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:27:51.000: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:27:51.000: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 19 12:28:00.997: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:28:00.997: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:28:00.997: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 19 12:28:01.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7p2g ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:28:01.732: INFO: stderr: ""
Dec 19 12:28:01.732: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:28:01.732: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 19 12:28:02.024: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 19 12:28:12.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7p2g ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:28:12.865: INFO: stderr: ""
Dec 19 12:28:12.865: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 19 12:28:12.865: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 19 12:28:13.062: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:28:13.062: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:13.062: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:13.062: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:23.320: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:28:23.320: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:23.320: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:33.117: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:28:33.117: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:33.117: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:43.910: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:28:43.910: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:28:53.935: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:28:53.935: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 19 12:29:03.216: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 19 12:29:13.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7p2g ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 19 12:29:13.858: INFO: stderr: ""
Dec 19 12:29:13.858: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 19 12:29:13.858: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 19 12:29:23.979: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 19 12:29:34.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f7p2g ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 19 12:29:34.813: INFO: stderr: ""
Dec 19 12:29:34.813: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 19 12:29:34.813: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 19 12:29:44.903: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:29:44.904: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:29:44.904: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:29:44.904: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:29:54.994: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:29:54.994: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:29:54.994: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:04.926: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:30:04.926: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:04.926: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:14.944: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:30:14.944: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:14.944: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:24.932: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:30:24.933: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:34.928: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
Dec 19 12:30:34.928: INFO: Waiting for Pod e2e-tests-statefulset-f7p2g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 19 12:30:44.927: INFO: Waiting for StatefulSet e2e-tests-statefulset-f7p2g/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 19 12:30:54.944: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f7p2g
Dec 19 12:30:54.952: INFO: Scaling statefulset ss2 to 0
Dec 19 12:31:35.114: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:31:35.122: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:31:35.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-f7p2g" for this suite.
Dec 19 12:31:43.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:31:43.545: INFO: namespace: e2e-tests-statefulset-f7p2g, resource: bindings, ignored listing per whitelist
Dec 19 12:31:43.579: INFO: namespace e2e-tests-statefulset-f7p2g deletion completed in 8.393941793s

• [SLOW TEST:262.935 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
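The rolling update and rollback above proceed in reverse ordinal order: the controller replaces pods one at a time from the highest ordinal down, waiting for each to adopt the target controller revision before touching the next, which is why the log shows `ss2-2`, then `ss2-1`, then `ss2-0` converging. A simplified sketch of that ordering, using the revision hashes from the log:

```python
# Sketch of a StatefulSet RollingUpdate pass: pods are moved to the target
# controller revision one at a time, highest ordinal first. This ignores
# readiness gating, which the real controller waits on between steps.
def rolling_update(pods, new_rev):
    """pods: {ordinal: current revision}. Mutates pods toward new_rev and
    returns the ordinal order in which replacements happened."""
    order = []
    for ordinal in sorted(pods, reverse=True):
        if pods[ordinal] != new_rev:
            pods[ordinal] = new_rev
            order.append(ordinal)
    return order

# Forward update from the log: revision ss2-6c5cd755cd -> ss2-7c9b54fd4c.
pods = {0: "ss2-6c5cd755cd", 1: "ss2-6c5cd755cd", 2: "ss2-6c5cd755cd"}
print(rolling_update(pods, "ss2-7c9b54fd4c"))  # [2, 1, 0]
```

A rollback is the same walk with the old revision as the target, which matches the second wave of "Waiting for Pod ... revision" lines in the log.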
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:31:43.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 19 12:31:44.020: INFO: Waiting up to 5m0s for pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-rkj2j" to be "success or failure"
Dec 19 12:31:44.057: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 36.681638ms
Dec 19 12:31:46.075: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055316986s
Dec 19 12:31:48.083: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062705962s
Dec 19 12:31:50.110: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090357684s
Dec 19 12:31:52.159: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139065888s
Dec 19 12:31:54.174: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153451036s
Dec 19 12:31:56.305: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.285293059s
STEP: Saw pod success
Dec 19 12:31:56.306: INFO: Pod "pod-83faea6e-225b-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:31:56.312: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-83faea6e-225b-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:31:56.378: INFO: Waiting for pod pod-83faea6e-225b-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:31:56.405: INFO: Pod pod-83faea6e-225b-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:31:56.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rkj2j" for this suite.
Dec 19 12:32:04.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:32:04.792: INFO: namespace: e2e-tests-emptydir-rkj2j, resource: bindings, ignored listing per whitelist
Dec 19 12:32:04.799: INFO: namespace e2e-tests-emptydir-rkj2j deletion completed in 8.288564705s

• [SLOW TEST:21.220 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
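The emptyDir matrix tests, this one and the `(root,0644,default)` case that follows, each write a file into an emptyDir volume under a given security context and verify the permission bits the container observes. A small sketch of the two ingredients, with the non-root UID chosen as an illustrative value:

```python
import stat

# Sketch of the (non-root,0777,default) emptyDir case: a pod fragment with a
# non-root security context and a default-medium emptyDir, plus a helper that
# renders a mode the way `ls -l` (and the test container) reports it.
def mode_string(mode):
    return stat.filemode(stat.S_IFREG | mode)

pod_fragment = {
    "securityContext": {"runAsUser": 1001},                # non-root (assumed UID)
    "volumes": [{"name": "test-volume", "emptyDir": {}}],  # default (disk) medium
}

print(mode_string(0o777))  # -rwxrwxrwx
print(mode_string(0o644))  # -rw-r--r--
```

Setting `emptyDir: {"medium": "Memory"}` instead would back the volume with tmpfs; the "default" in the test name refers to the disk-backed medium used here.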
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:32:04.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 19 12:32:05.279: INFO: Waiting up to 5m0s for pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-xjttc" to be "success or failure"
Dec 19 12:32:05.289: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.492214ms
Dec 19 12:32:07.309: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030306512s
Dec 19 12:32:09.321: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04240221s
Dec 19 12:32:11.814: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535603707s
Dec 19 12:32:13.900: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621465367s
Dec 19 12:32:15.917: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.637747612s
STEP: Saw pod success
Dec 19 12:32:15.917: INFO: Pod "pod-90a9502d-225b-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:32:15.925: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-90a9502d-225b-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:32:16.182: INFO: Waiting for pod pod-90a9502d-225b-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:32:16.187: INFO: Pod pod-90a9502d-225b-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:32:16.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xjttc" for this suite.
Dec 19 12:32:22.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:32:22.619: INFO: namespace: e2e-tests-emptydir-xjttc, resource: bindings, ignored listing per whitelist
Dec 19 12:32:22.623: INFO: namespace e2e-tests-emptydir-xjttc deletion completed in 6.362041917s

• [SLOW TEST:17.823 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:32:22.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lfk2s
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-lfk2s
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-lfk2s
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-lfk2s
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-lfk2s
Dec 19 12:32:37.051: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lfk2s, name: ss-0, uid: a3919d43-225b-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 19 12:32:37.420: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lfk2s, name: ss-0, uid: a3919d43-225b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 19 12:32:37.466: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lfk2s, name: ss-0, uid: a3919d43-225b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 19 12:32:37.563: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-lfk2s
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-lfk2s
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-lfk2s and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 19 12:32:53.340: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lfk2s
Dec 19 12:32:53.347: INFO: Scaling statefulset ss to 0
Dec 19 12:33:03.456: INFO: Waiting for statefulset status.replicas updated to 0
Dec 19 12:33:03.463: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:33:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lfk2s" for this suite.
Dec 19 12:33:11.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:33:11.752: INFO: namespace: e2e-tests-statefulset-lfk2s, resource: bindings, ignored listing per whitelist
Dec 19 12:33:11.856: INFO: namespace e2e-tests-statefulset-lfk2s deletion completed in 8.34422211s

• [SLOW TEST:49.233 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
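The recreation check in the StatefulSet test above boils down to polling the evicted pod's UID until the controller replaces it with a fresh object (the log shows the old UID observed through Pending and Failed phases, then a delete event, then a new running pod). A minimal sketch of that loop, where `get_uid` is a hypothetical stub standing in for `kubectl get pod ss-0 -n <ns> -o jsonpath='{.metadata.uid}'`; the stub returns a fixed new UID so the sketch terminates without a cluster:

```shell
#!/bin/sh
# UID observed before eviction (taken from the log above).
old_uid="a3919d43-225b-11ea-a994-fa163e34d433"

# Stub for the real query; against a live cluster this would be:
#   kubectl get pod ss-0 -n <namespace> -o jsonpath='{.metadata.uid}'
get_uid() { echo "ffffffff-0000-0000-0000-000000000000"; }

# Poll until the StatefulSet controller has created a pod with a new UID.
until [ "$(get_uid)" != "$old_uid" ]; do
  sleep 2
done
echo "ss-0 recreated"
```

A UID comparison is the reliable signal here: the pod keeps the same name (`ss-0`) across recreation, so only the UID distinguishes the old object from its replacement.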
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:33:11.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 19 12:33:12.064: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:33:12.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4xtxx" for this suite.
Dec 19 12:33:18.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:33:18.323: INFO: namespace: e2e-tests-kubectl-4xtxx, resource: bindings, ignored listing per whitelist
Dec 19 12:33:18.370: INFO: namespace e2e-tests-kubectl-4xtxx deletion completed in 6.178410592s

• [SLOW TEST:6.513 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
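With `-p 0` the proxy binds an ephemeral port, so a caller has to parse the chosen port out of kubectl's startup message before it can curl `/api/`. A sketch of that parse, using a hard-coded sample of the line `kubectl proxy` prints on startup (the sample port number is an assumption for illustration; a real run would capture the line from the proxy's stdout):

```shell
#!/bin/sh
# Sample of the startup line `kubectl proxy -p 0` writes; in a real run this
# would be read from the proxy process's stdout rather than hard-coded.
line='Starting to serve on 127.0.0.1:38211'

# Strip everything up to the last ':' to recover the ephemeral port.
port="${line##*:}"
echo "$port"

# With a live proxy the test would then verify the API is reachable:
#   curl -s "http://127.0.0.1:${port}/api/"
```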
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:33:18.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 19 12:33:18.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:21.317: INFO: stderr: ""
Dec 19 12:33:21.317: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 19 12:33:21.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:21.530: INFO: stderr: ""
Dec 19 12:33:21.530: INFO: stdout: "update-demo-nautilus-vbt4t update-demo-nautilus-xmf5b "
Dec 19 12:33:21.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbt4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:21.662: INFO: stderr: ""
Dec 19 12:33:21.663: INFO: stdout: ""
Dec 19 12:33:21.663: INFO: update-demo-nautilus-vbt4t is created but not running
Dec 19 12:33:26.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:26.811: INFO: stderr: ""
Dec 19 12:33:26.811: INFO: stdout: "update-demo-nautilus-vbt4t update-demo-nautilus-xmf5b "
Dec 19 12:33:26.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbt4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:26.937: INFO: stderr: ""
Dec 19 12:33:26.937: INFO: stdout: ""
Dec 19 12:33:26.937: INFO: update-demo-nautilus-vbt4t is created but not running
Dec 19 12:33:31.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:32.161: INFO: stderr: ""
Dec 19 12:33:32.161: INFO: stdout: "update-demo-nautilus-vbt4t update-demo-nautilus-xmf5b "
Dec 19 12:33:32.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbt4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:32.665: INFO: stderr: ""
Dec 19 12:33:32.665: INFO: stdout: ""
Dec 19 12:33:32.665: INFO: update-demo-nautilus-vbt4t is created but not running
Dec 19 12:33:37.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:37.845: INFO: stderr: ""
Dec 19 12:33:37.845: INFO: stdout: "update-demo-nautilus-vbt4t update-demo-nautilus-xmf5b "
Dec 19 12:33:37.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbt4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:37.967: INFO: stderr: ""
Dec 19 12:33:37.967: INFO: stdout: "true"
Dec 19 12:33:37.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbt4t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:38.073: INFO: stderr: ""
Dec 19 12:33:38.073: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 19 12:33:38.073: INFO: validating pod update-demo-nautilus-vbt4t
Dec 19 12:33:38.096: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 19 12:33:38.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 19 12:33:38.096: INFO: update-demo-nautilus-vbt4t is verified up and running
Dec 19 12:33:38.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmf5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:38.204: INFO: stderr: ""
Dec 19 12:33:38.204: INFO: stdout: "true"
Dec 19 12:33:38.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmf5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:33:38.291: INFO: stderr: ""
Dec 19 12:33:38.291: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 19 12:33:38.291: INFO: validating pod update-demo-nautilus-xmf5b
Dec 19 12:33:38.303: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 19 12:33:38.303: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 19 12:33:38.303: INFO: update-demo-nautilus-xmf5b is verified up and running
STEP: rolling-update to new replication controller
Dec 19 12:33:38.305: INFO: scanned /root for discovery docs: 
Dec 19 12:33:38.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:13.664: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 19 12:34:13.664: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 19 12:34:13.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:13.920: INFO: stderr: ""
Dec 19 12:34:13.920: INFO: stdout: "update-demo-kitten-4px8w update-demo-kitten-sf9sh "
Dec 19 12:34:13.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4px8w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:14.050: INFO: stderr: ""
Dec 19 12:34:14.050: INFO: stdout: "true"
Dec 19 12:34:14.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4px8w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:14.198: INFO: stderr: ""
Dec 19 12:34:14.198: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 19 12:34:14.198: INFO: validating pod update-demo-kitten-4px8w
Dec 19 12:34:14.259: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 19 12:34:14.259: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 19 12:34:14.259: INFO: update-demo-kitten-4px8w is verified up and running
Dec 19 12:34:14.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sf9sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:14.377: INFO: stderr: ""
Dec 19 12:34:14.378: INFO: stdout: "true"
Dec 19 12:34:14.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sf9sh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2xs7v'
Dec 19 12:34:14.479: INFO: stderr: ""
Dec 19 12:34:14.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 19 12:34:14.479: INFO: validating pod update-demo-kitten-sf9sh
Dec 19 12:34:14.502: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 19 12:34:14.503: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 19 12:34:14.503: INFO: update-demo-kitten-sf9sh is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:34:14.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2xs7v" for this suite.
Dec 19 12:34:38.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:34:38.720: INFO: namespace: e2e-tests-kubectl-2xs7v, resource: bindings, ignored listing per whitelist
Dec 19 12:34:38.746: INFO: namespace e2e-tests-kubectl-2xs7v deletion completed in 24.22761228s

• [SLOW TEST:80.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:34:38.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 19 12:34:38.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vswf'
Dec 19 12:34:39.223: INFO: stderr: ""
Dec 19 12:34:39.223: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 19 12:34:40.925: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:40.925: INFO: Found 0 / 1
Dec 19 12:34:41.381: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:41.382: INFO: Found 0 / 1
Dec 19 12:34:42.374: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:42.374: INFO: Found 0 / 1
Dec 19 12:34:43.299: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:43.299: INFO: Found 0 / 1
Dec 19 12:34:44.296: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:44.296: INFO: Found 0 / 1
Dec 19 12:34:46.075: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:46.075: INFO: Found 0 / 1
Dec 19 12:34:46.288: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:46.288: INFO: Found 0 / 1
Dec 19 12:34:47.240: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:47.240: INFO: Found 0 / 1
Dec 19 12:34:48.268: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:48.268: INFO: Found 0 / 1
Dec 19 12:34:49.244: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:49.244: INFO: Found 0 / 1
Dec 19 12:34:50.271: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:50.271: INFO: Found 1 / 1
Dec 19 12:34:50.271: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 19 12:34:50.279: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:50.279: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 19 12:34:50.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5l6f5 --namespace=e2e-tests-kubectl-8vswf -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 19 12:34:50.477: INFO: stderr: ""
Dec 19 12:34:50.478: INFO: stdout: "pod/redis-master-5l6f5 patched\n"
STEP: checking annotations
Dec 19 12:34:50.566: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 12:34:50.567: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:34:50.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8vswf" for this suite.
Dec 19 12:35:14.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:35:14.760: INFO: namespace: e2e-tests-kubectl-8vswf, resource: bindings, ignored listing per whitelist
Dec 19 12:35:14.816: INFO: namespace e2e-tests-kubectl-8vswf deletion completed in 24.203185317s

• [SLOW TEST:36.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
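The patch step above sends a strategic-merge payload that adds a single annotation. Applying it needs a live cluster, but the payload itself can be sanity-checked locally; the pod name and namespace in the comment below are reused from the log purely as placeholders:

```shell
#!/bin/sh
# The exact payload the test applies.
patch='{"metadata":{"annotations":{"x":"y"}}}'

# Check the payload is well-formed JSON before sending it anywhere.
python3 -c 'import json, sys; json.loads(sys.argv[1])' "$patch" && echo "valid"

# Against a live cluster the test then runs (names taken from the log above):
#   kubectl patch pod redis-master-5l6f5 \
#     --namespace=e2e-tests-kubectl-8vswf -p "$patch"
```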
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:35:14.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-01d69e12-225c-11ea-a3c6-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-01d69e12-225c-11ea-a3c6-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:36:34.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7vctm" for this suite.
Dec 19 12:36:59.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:36:59.175: INFO: namespace: e2e-tests-configmap-7vctm, resource: bindings, ignored listing per whitelist
Dec 19 12:36:59.175: INFO: namespace e2e-tests-configmap-7vctm deletion completed in 24.212120426s

• [SLOW TEST:104.358 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
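The long pause at "waiting to observe update in volume" covers the kubelet's sync period: projected ConfigMap files live behind a `..data` symlink that the kubelet swaps atomically on update, so a reader sees either the old or the new content, never a torn write. A local emulation of that swap with plain files (the `..v1`/`..v2` directory names are illustrative; the kubelet uses timestamped directories):

```shell
#!/bin/sh
# Emulate the kubelet's atomic ConfigMap volume update with plain files.
d="$(mktemp -d)"
mkdir "$d/..v1" "$d/..v2"        # stand-ins for the kubelet's timestamped dirs
printf 'old' > "$d/..v1/key"
printf 'new' > "$d/..v2/key"

ln -s ..v1 "$d/..data"           # symlink to the current payload directory
ln -s ..data/key "$d/key"        # the path a container actually opens
before="$(cat "$d/key")"

ln -sfn ..v2 "$d/..data"         # the atomic swap performed on update
after="$(cat "$d/key")"

echo "$before -> $after"
rm -rf "$d"
```

Because only the `..data` symlink changes, every key in the volume flips to the new ConfigMap revision in a single step, which is what the test eventually observes from inside the pod.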
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:36:59.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:36:59.419: INFO: Creating ReplicaSet my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004
Dec 19 12:36:59.445: INFO: Pod name my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004: Found 0 pods out of 1
Dec 19 12:37:04.505: INFO: Pod name my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004: Found 1 pods out of 1
Dec 19 12:37:04.506: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004" is running
Dec 19 12:37:08.622: INFO: Pod "my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004-phzv8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 12:36:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 12:36:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 12:36:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-19 12:36:59 +0000 UTC Reason: Message:}])
Dec 19 12:37:08.623: INFO: Trying to dial the pod
Dec 19 12:37:13.726: INFO: Controller my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004: Got expected result from replica 1 [my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004-phzv8]: "my-hostname-basic-3ffe6c69-225c-11ea-a3c6-0242ac110004-phzv8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:37:13.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-b4hgs" for this suite.
Dec 19 12:37:19.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:37:19.843: INFO: namespace: e2e-tests-replicaset-b4hgs, resource: bindings, ignored listing per whitelist
Dec 19 12:37:19.968: INFO: namespace e2e-tests-replicaset-b4hgs deletion completed in 6.232619534s

• [SLOW TEST:20.792 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:37:19.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:37:20.206: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 19 12:37:25.369: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 19 12:37:33.405: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 19 12:37:33.506: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-l27hz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l27hz/deployments/test-cleanup-deployment,UID:54420169-225c-11ea-a994-fa163e34d433,ResourceVersion:15347788,Generation:1,CreationTimestamp:2019-12-19 12:37:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 19 12:37:33.532: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Dec 19 12:37:33.532: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 19 12:37:33.533: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-l27hz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l27hz/replicasets/test-cleanup-controller,UID:4c5da33e-225c-11ea-a994-fa163e34d433,ResourceVersion:15347789,Generation:1,CreationTimestamp:2019-12-19 12:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 54420169-225c-11ea-a994-fa163e34d433 0xc0023d89b7 0xc0023d89b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 19 12:37:33.574: INFO: Pod "test-cleanup-controller-974l4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-974l4,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-l27hz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l27hz/pods/test-cleanup-controller-974l4,UID:4c61d628-225c-11ea-a994-fa163e34d433,ResourceVersion:15347785,Generation:0,CreationTimestamp:2019-12-19 12:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 4c5da33e-225c-11ea-a994-fa163e34d433 0xc001c681c7 0xc001c681c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vlbdz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlbdz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vlbdz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c682f0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001c68310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:37:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:37:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:37:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-19 12:37:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-19 12:37:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5209f0779035424c198c6954f605807f874542b3e2cd88443c3f8709f3202c5d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
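The "test-cleanup" objects dumped above can be condensed into a manifest sketch. Names, labels, namespace, and image are taken from the dump; the `revisionHistoryLimit: 0` setting is an assumption about how the test makes the Deployment delete old replica sets — that value is not printed in the log.

```yaml
# Sketch reconstructed from the ReplicaSet/Pod dumps above (hypothetical
# reconstruction, not the test's actual manifest).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: e2e-tests-deployment-l27hz
spec:
  replicas: 1
  revisionHistoryLimit: 0   # assumed: prune all old ReplicaSets after rollout
  selector:
    matchLabels:
      name: cleanup-pod
      pod: nginx
  template:
    metadata:
      labels:
        name: cleanup-pod
        pod: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```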
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:37:33.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-l27hz" for this suite.
Dec 19 12:37:47.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:37:48.075: INFO: namespace: e2e-tests-deployment-l27hz, resource: bindings, ignored listing per whitelist
Dec 19 12:37:48.104: INFO: namespace e2e-tests-deployment-l27hz deletion completed in 14.339972914s

• [SLOW TEST:28.136 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:37:48.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 19 12:37:57.184: INFO: 10 pods remaining
Dec 19 12:37:57.184: INFO: 10 pods have nil DeletionTimestamp
Dec 19 12:37:57.184: INFO: 
Dec 19 12:37:58.485: INFO: 9 pods remaining
Dec 19 12:37:58.485: INFO: 9 pods have nil DeletionTimestamp
Dec 19 12:37:58.485: INFO: 
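The "delete the rc" step above relies on the deleteOptions named in the test title: with foreground propagation, the ReplicaController is kept (held by a `foregroundDeletion` finalizer) until all of its pods are gone, which is why the log counts pods down before the rc disappears. A sketch of the DeleteOptions body implied by the test (the exact options used are not printed in the log, so this is an assumption):

```yaml
# Assumed DeleteOptions for "keep the rc around until all its pods are deleted"
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```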
STEP: Gathering metrics
W1219 12:37:59.600859       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 12:37:59.601: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:37:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bk4w2" for this suite.
Dec 19 12:38:17.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:38:17.831: INFO: namespace: e2e-tests-gc-bk4w2, resource: bindings, ignored listing per whitelist
Dec 19 12:38:17.841: INFO: namespace e2e-tests-gc-bk4w2 deletion completed in 18.2314726s

• [SLOW TEST:29.736 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:38:17.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-6f6d13db-225c-11ea-a3c6-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-6f6d1487-225c-11ea-a3c6-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6f6d13db-225c-11ea-a3c6-0242ac110004
STEP: Updating configmap cm-test-opt-upd-6f6d1487-225c-11ea-a3c6-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-6f6d14cc-225c-11ea-a3c6-0242ac110004
STEP: waiting to observe update in volume
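The STEPs above imply a pod whose projected volume references the three configMaps with `optional: true`, so the delete/update/create operations are tolerated and eventually reflected in the volume. ConfigMap names are from the log; the volume name and structure are illustrative assumptions:

```yaml
# Hypothetical volume stanza for the test pod (names other than the
# configMap names are assumed, not taken from the log).
volumes:
- name: projected-configmap-volume
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del-6f6d13db-225c-11ea-a3c6-0242ac110004
        optional: true
    - configMap:
        name: cm-test-opt-upd-6f6d1487-225c-11ea-a3c6-0242ac110004
        optional: true
    - configMap:
        name: cm-test-opt-create-6f6d14cc-225c-11ea-a3c6-0242ac110004
        optional: true
```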
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:38:39.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zq49b" for this suite.
Dec 19 12:39:19.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:39:20.009: INFO: namespace: e2e-tests-projected-zq49b, resource: bindings, ignored listing per whitelist
Dec 19 12:39:20.014: INFO: namespace e2e-tests-projected-zq49b deletion completed in 40.25059179s

• [SLOW TEST:62.173 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:39:20.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:39:20.162: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 19 12:39:20.620: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 19 12:39:25.636: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 19 12:39:31.655: INFO: Creating deployment "test-rolling-update-deployment"
Dec 19 12:39:31.675: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 19 12:39:31.746: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 19 12:39:34.244: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 19 12:39:34.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355972, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:39:36.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355972, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:39:38.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355972, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:39:40.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355972, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:39:42.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355982, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712355971, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:39:44.302: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
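The rollout traced in the status dumps above corresponds to a standard RollingUpdate Deployment. This sketch is reconstructed from the object dump in the log (selector, image, and the 25%/25% surge/unavailable defaults); it is a reconstruction, not the test's literal manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # old pod kept until the new one is Ready
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```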
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 19 12:39:44.546: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sg9f7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sg9f7/deployments/test-rolling-update-deployment,UID:9abc3743-225c-11ea-a994-fa163e34d433,ResourceVersion:15348158,Generation:1,CreationTimestamp:2019-12-19 12:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-19 12:39:31 +0000 UTC 2019-12-19 12:39:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-19 12:39:42 +0000 UTC 2019-12-19 12:39:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 19 12:39:44.659: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sg9f7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sg9f7/replicasets/test-rolling-update-deployment-75db98fb4c,UID:9acb12ae-225c-11ea-a994-fa163e34d433,ResourceVersion:15348149,Generation:1,CreationTimestamp:2019-12-19 12:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9abc3743-225c-11ea-a994-fa163e34d433 0xc000a1d8b7 0xc000a1d8b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 19 12:39:44.659: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 19 12:39:44.660: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sg9f7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sg9f7/replicasets/test-rolling-update-controller,UID:93e259b4-225c-11ea-a994-fa163e34d433,ResourceVersion:15348157,Generation:2,CreationTimestamp:2019-12-19 12:39:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9abc3743-225c-11ea-a994-fa163e34d433 0xc000a1d6d7 0xc000a1d6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 19 12:39:44.688: INFO: Pod "test-rolling-update-deployment-75db98fb4c-vzz8j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-vzz8j,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sg9f7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sg9f7/pods/test-rolling-update-deployment-75db98fb4c-vzz8j,UID:9addf532-225c-11ea-a994-fa163e34d433,ResourceVersion:15348148,Generation:0,CreationTimestamp:2019-12-19 12:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 9acb12ae-225c-11ea-a994-fa163e34d433 0xc0015c3db7 0xc0015c3db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ps4fp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ps4fp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ps4fp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015c3e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015c3e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:39:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:39:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:39:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:39:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-19 12:39:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-19 12:39:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://bc7a912fbfb02780b5803f1096c41098fc763377e7e2a128c7a3496dee8e43ad}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:39:44.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-sg9f7" for this suite.
Dec 19 12:39:52.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:39:52.876: INFO: namespace: e2e-tests-deployment-sg9f7, resource: bindings, ignored listing per whitelist
Dec 19 12:39:52.917: INFO: namespace e2e-tests-deployment-sg9f7 deletion completed in 8.211950254s

• [SLOW TEST:32.902 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:39:52.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a853e7ad-225c-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:39:54.510: INFO: Waiting up to 5m0s for pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-s7r5j" to be "success or failure"
Dec 19 12:39:54.608: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 97.520983ms
Dec 19 12:39:56.924: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413626078s
Dec 19 12:39:59.017: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506564638s
Dec 19 12:40:01.530: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.019869981s
Dec 19 12:40:03.563: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.052639971s
Dec 19 12:40:05.587: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.076613394s
STEP: Saw pod success
Dec 19 12:40:05.587: INFO: Pod "pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:40:05.593: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 19 12:40:05.719: INFO: Waiting for pod pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:40:05.735: INFO: Pod pod-secrets-a855e52c-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:40:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s7r5j" for this suite.
Dec 19 12:40:11.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:40:12.030: INFO: namespace: e2e-tests-secrets-s7r5j, resource: bindings, ignored listing per whitelist
Dec 19 12:40:12.042: INFO: namespace e2e-tests-secrets-s7r5j deletion completed in 6.297500937s

• [SLOW TEST:19.125 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
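For context on what the passing spec above exercises: the test mounts a Secret as a volume with an explicit `defaultMode` and verifies the projected files carry that mode. A minimal sketch of that kind of pod manifest (names and image are illustrative, not the actual e2e fixture) could be built like this:

```python
# Sketch (not the real e2e fixture): a pod that mounts a Secret volume
# with an explicit defaultMode, as the conformance test above does.
def secret_volume_pod(secret_name, default_mode=0o400):
    """Build a pod manifest mounting `secret_name` with `default_mode`."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-secrets-example"},  # illustrative name
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{
                "name": "secret-volume",
                # In JSON manifests the mode is carried as a decimal integer.
                "secret": {"secretName": secret_name,
                           "defaultMode": default_mode},
            }],
            "containers": [{
                "name": "secret-volume-test",
                "image": "mounttest:example",  # placeholder image
                "volumeMounts": [{"name": "secret-volume",
                                  "mountPath": "/etc/secret-volume"}],
            }],
        },
    }

pod = secret_volume_pod("secret-test-example", 0o400)
print(pod["spec"]["volumes"][0]["secret"]["defaultMode"])  # 256 (== 0o400)
```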
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:40:12.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 19 12:40:12.257: INFO: Waiting up to 5m0s for pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-wgsm8" to be "success or failure"
Dec 19 12:40:12.264: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.3564ms
Dec 19 12:40:14.476: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219423295s
Dec 19 12:40:16.502: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244785466s
Dec 19 12:40:18.790: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.53294519s
Dec 19 12:40:20.806: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549106718s
Dec 19 12:40:22.844: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587325924s
Dec 19 12:40:25.113: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.856240389s
STEP: Saw pod success
Dec 19 12:40:25.113: INFO: Pod "downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:40:25.124: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 12:40:25.521: INFO: Waiting for pod downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:40:25.536: INFO: Pod downward-api-b2ed4a6b-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:40:25.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wgsm8" for this suite.
Dec 19 12:40:31.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:40:31.656: INFO: namespace: e2e-tests-downward-api-wgsm8, resource: bindings, ignored listing per whitelist
Dec 19 12:40:31.772: INFO: namespace e2e-tests-downward-api-wgsm8 deletion completed in 6.228519389s

• [SLOW TEST:19.730 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
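The Downward API spec above injects pod metadata into the container environment via `fieldRef`. A small helper (illustrative, not the e2e fixture itself) shows the shape of those env-var entries:

```python
# Sketch: Downward API env vars use fieldRef to expose pod metadata.
def downward_api_env(field_paths):
    """Turn {ENV_NAME: fieldPath} into Kubernetes env-var entries."""
    return [
        {"name": name, "valueFrom": {"fieldRef": {"fieldPath": path}}}
        for name, path in field_paths.items()
    ]

env = downward_api_env({
    "POD_NAME": "metadata.name",
    "POD_NAMESPACE": "metadata.namespace",
    "POD_IP": "status.podIP",  # populated once the pod has an IP
})
print(env[0]["valueFrom"]["fieldRef"]["fieldPath"])  # metadata.name
```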
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:40:31.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4btv5/secret-test-bec54939-225c-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:40:32.137: INFO: Waiting up to 5m0s for pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-4btv5" to be "success or failure"
Dec 19 12:40:32.157: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.070845ms
Dec 19 12:40:34.300: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163281827s
Dec 19 12:40:36.327: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189393349s
Dec 19 12:40:38.909: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.771393315s
Dec 19 12:40:40.969: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.831687841s
Dec 19 12:40:43.025: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.888314661s
STEP: Saw pod success
Dec 19 12:40:43.026: INFO: Pod "pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:40:43.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004 container env-test: 
STEP: delete the pod
Dec 19 12:40:43.087: INFO: Waiting for pod pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:40:43.110: INFO: Pod pod-configmaps-bec68fde-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:40:43.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4btv5" for this suite.
Dec 19 12:40:49.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:40:49.393: INFO: namespace: e2e-tests-secrets-4btv5, resource: bindings, ignored listing per whitelist
Dec 19 12:40:49.393: INFO: namespace e2e-tests-secrets-4btv5 deletion completed in 6.273404629s

• [SLOW TEST:17.620 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
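The spec above consumes a Secret through the environment rather than a volume, which uses `secretKeyRef`. A hedged sketch of that env-var shape (names are illustrative):

```python
# Sketch: consuming a Secret via the environment uses secretKeyRef.
def secret_env_var(env_name, secret_name, key):
    """Build an env-var entry sourced from one key of a Secret."""
    return {
        "name": env_name,
        "valueFrom": {"secretKeyRef": {"name": secret_name, "key": key}},
    }

var = secret_env_var("SECRET_DATA", "secret-test-example", "data-1")
print(var["valueFrom"]["secretKeyRef"]["key"])  # data-1
```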
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:40:49.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 19 12:40:49.789: INFO: Waiting up to 5m0s for pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-var-expansion-c4gxr" to be "success or failure"
Dec 19 12:40:49.809: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.202803ms
Dec 19 12:40:51.866: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076352815s
Dec 19 12:40:53.920: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130222866s
Dec 19 12:40:56.565: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775567651s
Dec 19 12:40:58.624: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834458912s
Dec 19 12:41:00.638: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.84880058s
STEP: Saw pod success
Dec 19 12:41:00.638: INFO: Pod "var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:41:00.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 12:41:00.834: INFO: Waiting for pod var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:41:00.874: INFO: Pod var-expansion-c941bdd9-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:41:00.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-c4gxr" for this suite.
Dec 19 12:41:08.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:41:08.149: INFO: namespace: e2e-tests-var-expansion-c4gxr, resource: bindings, ignored listing per whitelist
Dec 19 12:41:08.302: INFO: namespace e2e-tests-var-expansion-c4gxr deletion completed in 7.415053429s

• [SLOW TEST:18.909 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
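The Variable Expansion spec above checks that env vars can be composed from earlier ones using the Kubernetes `$(VAR_NAME)` syntax. A tiny resolver (a simplification of the real expansion rules, for illustration only) shows the core semantics:

```python
import re

# Sketch: resolve $(NAME) references against already-defined env vars,
# leaving unresolved references intact, as Kubernetes expansion does.
def expand(value, env):
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)), value)

env = {"FOO": "foo-value", "BAR": "bar-value"}
print(expand("$(FOO);;$(BAR)", env))  # foo-value;;bar-value
```

(The real expander also supports `$$` escaping, which this sketch omits.)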
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:41:08.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 19 12:41:08.631: INFO: Waiting up to 5m0s for pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-mhtnc" to be "success or failure"
Dec 19 12:41:08.642: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.092945ms
Dec 19 12:41:10.652: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020513771s
Dec 19 12:41:12.665: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033594615s
Dec 19 12:41:15.400: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.768457823s
Dec 19 12:41:17.472: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840922158s
Dec 19 12:41:19.506: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.874595383s
Dec 19 12:41:21.529: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.897830207s
STEP: Saw pod success
Dec 19 12:41:21.529: INFO: Pod "pod-d47f8843-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:41:21.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d47f8843-225c-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:41:22.128: INFO: Waiting for pod pod-d47f8843-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:41:22.467: INFO: Pod pod-d47f8843-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:41:22.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mhtnc" for this suite.
Dec 19 12:41:28.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:41:28.807: INFO: namespace: e2e-tests-emptydir-mhtnc, resource: bindings, ignored listing per whitelist
Dec 19 12:41:28.817: INFO: namespace e2e-tests-emptydir-mhtnc deletion completed in 6.316251085s

• [SLOW TEST:20.514 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
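The `(root,0666,tmpfs)` EmptyDir spec above requests a memory-backed volume; in the manifest that is an `emptyDir` with medium `"Memory"`. An illustrative sketch (placeholder names/image, not the e2e fixture):

```python
# Sketch: an emptyDir backed by tmpfs is requested with medium "Memory";
# the test then writes a file in it and checks ownership and mode.
def emptydir_pod(medium="Memory"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-example"},  # illustrative
        "spec": {
            "restartPolicy": "Never",
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": medium}}],
            "containers": [{
                "name": "test-container",
                "image": "mounttest:example",  # placeholder image
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
        },
    }

pod = emptydir_pod()
print(pod["spec"]["volumes"][0]["emptyDir"]["medium"])  # Memory
```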
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:41:28.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e0b59293-225c-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:41:29.271: INFO: Waiting up to 5m0s for pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-ctqzj" to be "success or failure"
Dec 19 12:41:29.348: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 77.301759ms
Dec 19 12:41:31.925: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.654205484s
Dec 19 12:41:33.951: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6802494s
Dec 19 12:41:36.418: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.147009307s
Dec 19 12:41:38.526: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.255630639s
Dec 19 12:41:40.550: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.278916326s
Dec 19 12:41:43.285: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.014516568s
STEP: Saw pod success
Dec 19 12:41:43.286: INFO: Pod "pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:41:43.315: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 19 12:41:43.756: INFO: Waiting for pod pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:41:43.776: INFO: Pod pod-secrets-e0d49adf-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:41:43.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ctqzj" for this suite.
Dec 19 12:41:49.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:41:50.000: INFO: namespace: e2e-tests-secrets-ctqzj, resource: bindings, ignored listing per whitelist
Dec 19 12:41:50.037: INFO: namespace e2e-tests-secrets-ctqzj deletion completed in 6.245518934s
STEP: Destroying namespace "e2e-tests-secret-namespace-mzr66" for this suite.
Dec 19 12:41:56.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:41:56.178: INFO: namespace: e2e-tests-secret-namespace-mzr66, resource: bindings, ignored listing per whitelist
Dec 19 12:41:56.258: INFO: namespace e2e-tests-secret-namespace-mzr66 deletion completed in 6.221165583s

• [SLOW TEST:27.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:41:56.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 19 12:41:56.599: INFO: Waiting up to 5m0s for pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-containers-99sws" to be "success or failure"
Dec 19 12:41:56.635: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 35.248562ms
Dec 19 12:41:58.742: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14219266s
Dec 19 12:42:00.774: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174001534s
Dec 19 12:42:03.209: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609809136s
Dec 19 12:42:05.222: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622079991s
Dec 19 12:42:07.274: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.674845336s
STEP: Saw pod success
Dec 19 12:42:07.275: INFO: Pod "client-containers-f10f6115-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:42:07.295: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f10f6115-225c-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:42:07.460: INFO: Waiting for pod client-containers-f10f6115-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:42:07.472: INFO: Pod client-containers-f10f6115-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:42:07.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-99sws" for this suite.
Dec 19 12:42:13.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:42:13.681: INFO: namespace: e2e-tests-containers-99sws, resource: bindings, ignored listing per whitelist
Dec 19 12:42:13.688: INFO: namespace e2e-tests-containers-99sws deletion completed in 6.203541094s

• [SLOW TEST:17.429 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
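The Docker Containers spec above overrides both the image's default command and its arguments; in pod terms, `command` replaces the image ENTRYPOINT and `args` replaces CMD. A hedged sketch of that container fragment (illustrative values):

```python
# Sketch: pod-level command/args map onto Docker ENTRYPOINT/CMD overrides.
def container_with_override(image, command=None, args=None):
    c = {"name": "test-container", "image": image}
    if command:  # overrides the image's ENTRYPOINT
        c["command"] = command
    if args:     # overrides the image's CMD
        c["args"] = args
    return c

c = container_with_override("busybox:example",  # placeholder image
                            command=["/bin/sh", "-c"],
                            args=["echo override all"])
print(c["command"], c["args"])
```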
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:42:13.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 19 12:42:14.083: INFO: Waiting up to 5m0s for pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-qtrft" to be "success or failure"
Dec 19 12:42:14.111: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.797427ms
Dec 19 12:42:16.139: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056204001s
Dec 19 12:42:18.161: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077543696s
Dec 19 12:42:20.561: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477672996s
Dec 19 12:42:22.611: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52754208s
Dec 19 12:42:24.637: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.553637456s
STEP: Saw pod success
Dec 19 12:42:24.637: INFO: Pod "downward-api-fb864adc-225c-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:42:24.655: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fb864adc-225c-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 12:42:24.790: INFO: Waiting for pod downward-api-fb864adc-225c-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:42:24.849: INFO: Pod downward-api-fb864adc-225c-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:42:24.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qtrft" for this suite.
Dec 19 12:42:31.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:42:31.177: INFO: namespace: e2e-tests-downward-api-qtrft, resource: bindings, ignored listing per whitelist
Dec 19 12:42:31.220: INFO: namespace e2e-tests-downward-api-qtrft deletion completed in 6.272576725s

• [SLOW TEST:17.532 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
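The spec above exposes a container's own `limits.cpu/memory` and `requests.cpu/memory` through the environment, which uses `resourceFieldRef` rather than `fieldRef`. An illustrative sketch of those entries (container name copied from the log; helper is hypothetical):

```python
# Sketch: resourceFieldRef exposes a container's own resource numbers
# as env vars, as the limits/requests conformance test checks.
def resource_env(env_name, container, resource):
    return {
        "name": env_name,
        "valueFrom": {"resourceFieldRef": {"containerName": container,
                                           "resource": resource}},
    }

env = [resource_env("CPU_LIMIT", "dapi-container", "limits.cpu"),
       resource_env("MEMORY_REQUEST", "dapi-container", "requests.memory")]
print(env[0]["valueFrom"]["resourceFieldRef"]["resource"])  # limits.cpu
```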
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:42:31.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 19 12:42:31.435: INFO: Waiting up to 5m0s for pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-nr2br" to be "success or failure"
Dec 19 12:42:31.444: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.06845ms
Dec 19 12:42:33.460: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025753728s
Dec 19 12:42:35.478: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043328494s
Dec 19 12:42:37.860: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425340125s
Dec 19 12:42:40.398: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963563911s
Dec 19 12:42:42.417: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.98227249s
STEP: Saw pod success
Dec 19 12:42:42.417: INFO: Pod "pod-05e10f8f-225d-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:42:42.424: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-05e10f8f-225d-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 12:42:42.740: INFO: Waiting for pod pod-05e10f8f-225d-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:42:42.768: INFO: Pod pod-05e10f8f-225d-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:42:42.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nr2br" for this suite.
Dec 19 12:42:48.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:42:48.972: INFO: namespace: e2e-tests-emptydir-nr2br, resource: bindings, ignored listing per whitelist
Dec 19 12:42:49.093: INFO: namespace e2e-tests-emptydir-nr2br deletion completed in 6.232913978s

• [SLOW TEST:17.873 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:42:49.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 19 12:42:59.557: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-108c1ed5-225d-11ea-a3c6-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-4ddss", SelfLink:"/api/v1/namespaces/e2e-tests-pods-4ddss/pods/pod-submit-remove-108c1ed5-225d-11ea-a3c6-0242ac110004", UID:"108fc437-225d-11ea-a994-fa163e34d433", ResourceVersion:"15348653", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712356169, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"314302919"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h6m5p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00168d880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h6m5p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cc7c88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d73500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cc7cc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cc7ce0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001cc7ce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001cc7cec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712356169, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712356178, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712356178, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712356169, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002a92d60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002a92d80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://2eebb227e758cff71023fdc9f1db787c861d81fa054d9aaebe45f41ee4dea680"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:43:06.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4ddss" for this suite.
Dec 19 12:43:12.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:43:12.998: INFO: namespace: e2e-tests-pods-4ddss, resource: bindings, ignored listing per whitelist
Dec 19 12:43:13.067: INFO: namespace e2e-tests-pods-4ddss deletion completed in 6.216159323s

• [SLOW TEST:23.974 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:43:13.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5tqtn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5tqtn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

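Each probe loop above follows the same pattern: run a lookup, and write an OK marker file only if the answer section is non-empty. A minimal standalone sketch of one iteration (the doubled `$$` in the logged commands is escaping of `$` inside the pod's command string, so a plain `$` is used here; `lookup` is a stub standing in for `dig +noall +answer +search <name> A`, which needs cluster DNS to run against):

```shell
#!/bin/sh
# Stub for `dig +notcp +noall +answer +search kubernetes.default A`;
# it returns a fake answer record so the sketch runs anywhere.
lookup() { echo "kubernetes.default.svc.cluster.local. 30 IN A 10.96.0.1"; }

# The real loop writes OK to /results/<file> for the prober to read back;
# here we just print it.
check="$(lookup kubernetes.default)" && test -n "$check" && echo OK
```

An empty answer leaves `check` empty, `test -n` fails, and no marker is written, which is exactly the "Unable to read ..." state the prober reports below until the records resolve.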
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 19 12:43:29.489: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.497: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.508: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.520: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.528: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.538: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.545: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.559: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.564: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.569: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.575: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.584: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.595: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.599: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.603: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.606: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.610: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.613: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.617: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.620: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004: the server could not find the requested resource (get pods dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004)
Dec 19 12:43:29.620: INFO: Lookups using e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5tqtn.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 19 12:43:34.738: INFO: DNS probes using e2e-tests-dns-5tqtn/dns-test-1ed414d8-225d-11ea-a3c6-0242ac110004 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:43:34.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-5tqtn" for this suite.
Dec 19 12:43:42.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:43:43.123: INFO: namespace: e2e-tests-dns-5tqtn, resource: bindings, ignored listing per whitelist
Dec 19 12:43:43.162: INFO: namespace e2e-tests-dns-5tqtn deletion completed in 8.24442928s

• [SLOW TEST:30.094 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:43:43.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-65qkx/configmap-test-30d2bb9b-225d-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:43:43.503: INFO: Waiting up to 5m0s for pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-configmap-65qkx" to be "success or failure"
Dec 19 12:43:43.531: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.926709ms
Dec 19 12:43:45.546: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043413033s
Dec 19 12:43:47.567: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063820314s
Dec 19 12:43:49.584: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081047311s
Dec 19 12:43:51.697: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194110174s
Dec 19 12:43:53.724: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221450564s
STEP: Saw pod success
Dec 19 12:43:53.724: INFO: Pod "pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:43:53.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004 container env-test: 
STEP: delete the pod
Dec 19 12:43:53.926: INFO: Waiting for pod pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:43:53.940: INFO: Pod pod-configmaps-30d5ff22-225d-11ea-a3c6-0242ac110004 no longer exists
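The env-test container above consumes a ConfigMap key through its environment. A sketch of the check it performs (the variable name `CONFIG_DATA_1` and value `value-1` are illustrative, not taken from this log; in the real pod the variable is injected by the kubelet via `env[].valueFrom.configMapKeyRef`):

```shell
#!/bin/sh
# Illustrative stand-in: the kubelet would inject this variable from the
# ConfigMap; it is set directly here so the check is runnable.
CONFIG_DATA_1="value-1"
test "$CONFIG_DATA_1" = "value-1" && echo "env ok"
```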
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:43:53.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-65qkx" for this suite.
Dec 19 12:44:00.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:44:00.117: INFO: namespace: e2e-tests-configmap-65qkx, resource: bindings, ignored listing per whitelist
Dec 19 12:44:00.201: INFO: namespace e2e-tests-configmap-65qkx deletion completed in 6.255016629s

• [SLOW TEST:17.038 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:44:00.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:44:00.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-8mn9w" to be "success or failure"
Dec 19 12:44:00.465: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.963739ms
Dec 19 12:44:02.669: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242493221s
Dec 19 12:44:04.681: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254916305s
Dec 19 12:44:07.427: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.000452754s
Dec 19 12:44:09.455: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029264451s
Dec 19 12:44:11.476: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 11.049772027s
Dec 19 12:44:14.286: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.860217311s
STEP: Saw pod success
Dec 19 12:44:14.287: INFO: Pod "downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:44:14.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:44:14.794: INFO: Waiting for pod downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:44:14.806: INFO: Pod downwardapi-volume-3aebce2a-225d-11ea-a3c6-0242ac110004 no longer exists
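The client-container above reads its memory request from a file in a downwardAPI volume, and the framework compares the file contents against the declared request. A runnable sketch of that read (the path and the value 33554432, i.e. an assumed 32Mi request, are illustrative; the real file is populated by the kubelet from `resources.requests.memory`):

```shell
#!/bin/sh
# Simulate the downwardAPI file the kubelet would write, then perform
# the read the client container does.
dir="$(mktemp -d)"
echo "33554432" > "$dir/memory_request"   # kubelet's job in the real pod
cat "$dir/memory_request"                  # client container's job
```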
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:44:14.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8mn9w" for this suite.
Dec 19 12:44:20.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:44:20.977: INFO: namespace: e2e-tests-downward-api-8mn9w, resource: bindings, ignored listing per whitelist
Dec 19 12:44:21.099: INFO: namespace e2e-tests-downward-api-8mn9w deletion completed in 6.278248368s

• [SLOW TEST:20.898 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:44:21.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-llv7
STEP: Creating a pod to test atomic-volume-subpath
Dec 19 12:44:21.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-llv7" in namespace "e2e-tests-subpath-vvl5b" to be "success or failure"
Dec 19 12:44:21.543: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.844717ms
Dec 19 12:44:23.581: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051654729s
Dec 19 12:44:25.624: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094567433s
Dec 19 12:44:27.888: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358365562s
Dec 19 12:44:29.915: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386005314s
Dec 19 12:44:31.956: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.426144907s
Dec 19 12:44:33.988: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.458420474s
Dec 19 12:44:36.004: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.474445228s
Dec 19 12:44:38.265: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.735906976s
Dec 19 12:44:40.283: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 18.753967729s
Dec 19 12:44:42.300: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 20.770801575s
Dec 19 12:44:44.324: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 22.794222331s
Dec 19 12:44:46.341: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 24.81174867s
Dec 19 12:44:48.358: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 26.828086047s
Dec 19 12:44:50.376: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 28.846699208s
Dec 19 12:44:52.397: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 30.867312731s
Dec 19 12:44:54.417: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 32.887245263s
Dec 19 12:44:56.695: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Running", Reason="", readiness=false. Elapsed: 35.165309754s
Dec 19 12:44:59.151: INFO: Pod "pod-subpath-test-configmap-llv7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.621829006s
STEP: Saw pod success
Dec 19 12:44:59.151: INFO: Pod "pod-subpath-test-configmap-llv7" satisfied condition "success or failure"
Dec 19 12:44:59.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-llv7 container test-container-subpath-configmap-llv7: 
STEP: delete the pod
Dec 19 12:44:59.776: INFO: Waiting for pod pod-subpath-test-configmap-llv7 to disappear
Dec 19 12:44:59.800: INFO: Pod pod-subpath-test-configmap-llv7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-llv7
Dec 19 12:44:59.800: INFO: Deleting pod "pod-subpath-test-configmap-llv7" in namespace "e2e-tests-subpath-vvl5b"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:44:59.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vvl5b" for this suite.
Dec 19 12:45:07.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:45:08.048: INFO: namespace: e2e-tests-subpath-vvl5b, resource: bindings, ignored listing per whitelist
Dec 19 12:45:08.053: INFO: namespace e2e-tests-subpath-vvl5b deletion completed in 8.228406585s

• [SLOW TEST:46.954 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:45:08.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-x5bpd
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-x5bpd
STEP: Deleting pre-stop pod
Dec 19 12:45:35.500: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
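The JSON above is the server pod's state dump: `"prestop": 1` records that the tester pod's preStop hook reached the server exactly once before termination. A sketch of that counting, with a local function standing in for the server's HTTP handler (the handler shape is an assumption, not taken from this log):

```shell
#!/bin/sh
# Stand-in for the server: each preStop notification bumps the counter
# that the test later reads back as {"prestop": 1}.
prestop=0
on_prestop() { prestop=$((prestop + 1)); }

on_prestop   # the tester's preStop hook fires once during graceful deletion
echo "prestop=$prestop"
```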
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:45:35.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-x5bpd" for this suite.
Dec 19 12:46:17.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:46:17.980: INFO: namespace: e2e-tests-prestop-x5bpd, resource: bindings, ignored listing per whitelist
Dec 19 12:46:17.985: INFO: namespace e2e-tests-prestop-x5bpd deletion completed in 42.3274623s

• [SLOW TEST:69.931 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
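The PreStop run above deletes the tester pod, expects its preStop hook to fire before teardown, and verifies the server's state dump shows `"prestop": 1`. A minimal in-process sketch of that contract (class and method names are hypothetical illustrations, not the e2e framework's API):

```python
# Minimal sketch of the preStop contract exercised above: deleting a pod
# runs its preStop hook before the container is torn down, and the hook
# notifies a "server" that records the call. All names are hypothetical,
# not the e2e framework's API.

class Server:
    def __init__(self):
        self.received = {}  # e.g. {"prestop": 1}, as in the log's "Saw:" dump

    def notify(self, event):
        self.received[event] = self.received.get(event, 0) + 1

class Pod:
    def __init__(self, pre_stop=None):
        self.pre_stop = pre_stop
        self.running = True

    def delete(self):
        if self.pre_stop:          # the kubelet runs preStop before stopping the container
            self.pre_stop()
        self.running = False

server = Server()
tester = Pod(pre_stop=lambda: server.notify("prestop"))
tester.delete()
print(server.received)  # {'prestop': 1}
```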
S
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:46:17.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fjnpt
Dec 19 12:46:30.204: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fjnpt
STEP: checking the pod's current state and verifying that restartCount is present
Dec 19 12:46:30.212: INFO: Initial restart count of pod liveness-exec is 0
Dec 19 12:47:24.828: INFO: Restart count of pod e2e-tests-container-probe-fjnpt/liveness-exec is now 1 (54.616516006s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:47:24.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fjnpt" for this suite.
Dec 19 12:47:32.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:47:33.202: INFO: namespace: e2e-tests-container-probe-fjnpt, resource: bindings, ignored listing per whitelist
Dec 19 12:47:33.221: INFO: namespace e2e-tests-container-probe-fjnpt deletion completed in 8.285533559s

• [SLOW TEST:75.236 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
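In the run above, the exec probe (`cat /tmp/health`) passes while the file exists, starts failing once the container removes it, and the restart count goes from 0 to 1 about 55 seconds in. A rough simulation of that probe loop, assuming a failure threshold of 3 consecutive failures (a simplified model, not kubelet code):

```python
# Rough sketch of exec-liveness semantics seen above: the probe succeeds
# while /tmp/health exists, and after `failure_threshold` consecutive
# failures the kubelet restarts the container (restart count increments,
# the file comes back). Hypothetical simulation, not kubelet code.

def run_probes(health_file_lifetime, total_probes, failure_threshold=3):
    """Return the container restart count after `total_probes` probe ticks.

    health_file_lifetime: ticks the health file survives after each
    (re)start before the container deletes it.
    """
    restarts = 0
    age = 0          # ticks since the last (re)start
    failures = 0     # consecutive probe failures
    for _ in range(total_probes):
        probe_ok = age < health_file_lifetime  # models "cat /tmp/health"
        if probe_ok:
            failures = 0
        else:
            failures += 1
        if failures >= failure_threshold:
            restarts += 1            # kubelet restarts the container
            age, failures = 0, 0
        else:
            age += 1
    return restarts

print(run_probes(health_file_lifetime=5, total_probes=20))  # 2
```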
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:47:33.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b9ee4cb1-225d-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:47:33.529: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-dvvf2" to be "success or failure"
Dec 19 12:47:33.544: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.698813ms
Dec 19 12:47:35.560: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030737762s
Dec 19 12:47:37.584: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054882706s
Dec 19 12:47:39.815: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.286017698s
Dec 19 12:47:42.077: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547611661s
Dec 19 12:47:44.098: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.569362736s
Dec 19 12:47:46.119: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.589725212s
STEP: Saw pod success
Dec 19 12:47:46.119: INFO: Pod "pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:47:46.123: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 19 12:47:46.484: INFO: Waiting for pod pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:47:46.498: INFO: Pod pod-projected-configmaps-b9f11c09-225d-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:47:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dvvf2" for this suite.
Dec 19 12:47:52.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:47:52.693: INFO: namespace: e2e-tests-projected-dvvf2, resource: bindings, ignored listing per whitelist
Dec 19 12:47:52.738: INFO: namespace e2e-tests-projected-dvvf2 deletion completed in 6.226706513s

• [SLOW TEST:19.516 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
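Each `Waiting up to 5m0s for pod … to be "success or failure"` sequence above is the same poll loop: check the pod phase, log elapsed time, sleep, repeat until the condition holds or the timeout expires. A generic sketch of that pattern (helper names are mine, not the framework's):

```python
import time

# Generic poll-until-condition loop, in the spirit of the framework's
# "Waiting up to 5m0s for pod ... to be 'success or failure'" lines.
# Helper names are illustrative, not the e2e framework's API.

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse; raise TimeoutError on expiry."""
    start = clock()
    while True:
        result = condition()
        if result:
            return result
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)

# Example: a fake pod that reaches phase "Succeeded" on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
final = wait_for(lambda: next(phases) == "Succeeded" and "Succeeded",
                 timeout=10, interval=0)
print(final)  # Succeeded
```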
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:47:52.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c5aaf5a5-225d-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 19 12:47:53.226: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-9cfcg" to be "success or failure"
Dec 19 12:47:53.234: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49296ms
Dec 19 12:47:55.427: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201272409s
Dec 19 12:47:57.446: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220163215s
Dec 19 12:47:59.922: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695953368s
Dec 19 12:48:02.289: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.063453344s
Dec 19 12:48:04.497: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.271564313s
STEP: Saw pod success
Dec 19 12:48:04.497: INFO: Pod "pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:48:04.520: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 19 12:48:04.872: INFO: Waiting for pod pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:48:04.933: INFO: Pod pod-projected-configmaps-c5ae7d28-225d-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:48:04.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9cfcg" for this suite.
Dec 19 12:48:10.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:48:11.078: INFO: namespace: e2e-tests-projected-9cfcg, resource: bindings, ignored listing per whitelist
Dec 19 12:48:11.145: INFO: namespace e2e-tests-projected-9cfcg deletion completed in 6.20001605s

• [SLOW TEST:18.407 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:48:11.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 19 12:48:11.401: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-2bd4k" to be "success or failure"
Dec 19 12:48:11.437: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 36.657495ms
Dec 19 12:48:13.732: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330871036s
Dec 19 12:48:15.758: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357637704s
Dec 19 12:48:17.816: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415046359s
Dec 19 12:48:19.852: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450698809s
Dec 19 12:48:22.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.863287516s
Dec 19 12:48:24.310: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.909560401s
Dec 19 12:48:26.332: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.93080569s
STEP: Saw pod success
Dec 19 12:48:26.332: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 19 12:48:26.404: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 19 12:48:26.559: INFO: Waiting for pod pod-host-path-test to disappear
Dec 19 12:48:26.578: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:48:26.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-2bd4k" for this suite.
Dec 19 12:48:34.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:48:34.803: INFO: namespace: e2e-tests-hostpath-2bd4k, resource: bindings, ignored listing per whitelist
Dec 19 12:48:34.908: INFO: namespace e2e-tests-hostpath-2bd4k deletion completed in 8.314018206s

• [SLOW TEST:23.763 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:48:34.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 19 12:48:35.036: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 19 12:48:35.080: INFO: Waiting for terminating namespaces to be deleted...
Dec 19 12:48:35.086: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 19 12:48:35.113: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:48:35.113: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 12:48:35.113: INFO: 	Container coredns ready: true, restart count 0
Dec 19 12:48:35.113: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 19 12:48:35.113: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 19 12:48:35.113: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:48:35.113: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 19 12:48:35.113: INFO: 	Container weave ready: true, restart count 0
Dec 19 12:48:35.113: INFO: 	Container weave-npc ready: true, restart count 0
Dec 19 12:48:35.113: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 19 12:48:35.113: INFO: 	Container coredns ready: true, restart count 0
Dec 19 12:48:35.113: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 19 12:48:35.113: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e1c69b5e42162f], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:48:36.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-q9kn2" for this suite.
Dec 19 12:48:42.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:48:42.399: INFO: namespace: e2e-tests-sched-pred-q9kn2, resource: bindings, ignored listing per whitelist
Dec 19 12:48:42.423: INFO: namespace e2e-tests-sched-pred-q9kn2 deletion completed in 6.236933994s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.514 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
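The FailedScheduling event above ("1 node(s) didn't match node selector") follows from the predicate that a pod's nodeSelector must be a subset of the node's labels. The check reduces to (labels below are hypothetical examples):

```python
# Essence of the NodeSelector predicate behind the FailedScheduling event
# above: a pod fits a node only if every key/value pair in its
# nodeSelector is present among the node's labels. Labels here are
# hypothetical examples, not taken from the cluster.

def matches_node_selector(node_labels, node_selector):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {"kubernetes.io/hostname": "hunter-server-hu5at5svl7ps"}
selector = {"label": "nonempty-value"}  # a deliberately unmatchable selector

feasible = [n for n in [node_labels] if matches_node_selector(n, selector)]
print(f"{len(feasible)}/1 nodes are available")  # 0/1 nodes are available
```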
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:48:42.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1219 12:48:59.547852       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 12:48:59.548: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:48:59.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vj9xl" for this suite.
Dec 19 12:49:19.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:49:19.806: INFO: namespace: e2e-tests-gc-vj9xl, resource: bindings, ignored listing per whitelist
Dec 19 12:49:19.819: INFO: namespace e2e-tests-gc-vj9xl deletion completed in 20.262152576s

• [SLOW TEST:37.395 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
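The garbage-collector test above gives half of `simpletest-rc-to-be-deleted`'s pods a second owner reference to `simpletest-rc-to-stay`, deletes the first rc, and expects those pods to survive because a valid owner remains. The ownership rule reduces to: a dependent is collected only when none of its owners still exist. A toy version (object names are illustrative):

```python
# Toy version of the ownerReference rule the test above exercises:
# a dependent object is garbage-collected only when *all* of its owners
# are gone; one surviving owner keeps it alive. Names are illustrative.

def collect(objects, owners_of):
    """Return the objects that survive GC, given a mapping from each
    dependent to the set of its owners' names."""
    live = set(objects)
    changed = True
    while changed:  # deleting an owner may orphan its own dependents
        changed = False
        for obj in list(live):
            owners = owners_of.get(obj, set())
            if owners and not (owners & live):
                live.discard(obj)  # every owner is gone -> collect
                changed = True
    return live

objects = ["rc-to-stay", "pod-a", "pod-b"]
owners = {
    "pod-a": {"rc-to-be-deleted"},                # sole owner was deleted
    "pod-b": {"rc-to-be-deleted", "rc-to-stay"},  # one valid owner remains
}
print(sorted(collect(objects, owners)))  # ['pod-b', 'rc-to-stay']
```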
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:49:19.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 19 12:49:40.432: INFO: Successfully updated pod "labelsupdatefb50c046-225d-11ea-a3c6-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:49:42.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cwms7" for this suite.
Dec 19 12:50:06.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:50:06.863: INFO: namespace: e2e-tests-downward-api-cwms7, resource: bindings, ignored listing per whitelist
Dec 19 12:50:06.919: INFO: namespace e2e-tests-downward-api-cwms7 deletion completed in 24.262857404s

• [SLOW TEST:47.100 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:50:06.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-157deef1-225e-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:50:07.150: INFO: Waiting up to 5m0s for pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-h75fd" to be "success or failure"
Dec 19 12:50:07.169: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.889642ms
Dec 19 12:50:09.470: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320547406s
Dec 19 12:50:11.502: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351837165s
Dec 19 12:50:13.844: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.693984082s
Dec 19 12:50:15.868: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.718043979s
Dec 19 12:50:17.952: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.802493952s
Dec 19 12:50:20.245: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.095168959s
STEP: Saw pod success
Dec 19 12:50:20.245: INFO: Pod "pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:50:20.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 19 12:50:20.852: INFO: Waiting for pod pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:50:20.884: INFO: Pod pod-secrets-157eafe7-225e-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:50:20.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h75fd" for this suite.
Dec 19 12:50:29.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:50:29.225: INFO: namespace: e2e-tests-secrets-h75fd, resource: bindings, ignored listing per whitelist
Dec 19 12:50:29.266: INFO: namespace e2e-tests-secrets-h75fd deletion completed in 8.359374806s

• [SLOW TEST:22.347 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
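The secrets test above mounts the secret volume as a non-root user with `defaultMode` and `fsGroup` set: `defaultMode` is an octal permission applied to each projected file, while `fsGroup` becomes the files' group owner so the non-root user can still read them. Python's `stat` module can render what a given `defaultMode` looks like in `ls -l` terms:

```python
import stat

# defaultMode in the secret volume is an octal permission applied to each
# projected key; stat.filemode() renders those bits the way `ls -l` would.
# (fsGroup, not modeled here, sets the files' group owner.)

def rendered_mode(default_mode):
    # S_IFREG marks it a regular file so filemode prints a leading '-'
    return stat.filemode(stat.S_IFREG | default_mode)

print(rendered_mode(0o644))  # -rw-r--r--
print(rendered_mode(0o440))  # -r--r-----
```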
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:50:29.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 19 12:50:42.113: INFO: Successfully updated pod "annotationupdate22d0548d-225e-11ea-a3c6-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:50:44.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hr6bm" for this suite.
Dec 19 12:51:08.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:51:08.619: INFO: namespace: e2e-tests-projected-hr6bm, resource: bindings, ignored listing per whitelist
Dec 19 12:51:08.619: INFO: namespace e2e-tests-projected-hr6bm deletion completed in 24.387656229s

• [SLOW TEST:39.352 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:51:08.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:51:09.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-j4gjz" for this suite.
Dec 19 12:51:15.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:51:15.509: INFO: namespace: e2e-tests-kubelet-test-j4gjz, resource: bindings, ignored listing per whitelist
Dec 19 12:51:15.648: INFO: namespace e2e-tests-kubelet-test-j4gjz deletion completed in 6.285475015s

• [SLOW TEST:7.028 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:51:15.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mqjm6
Dec 19 12:51:27.971: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mqjm6
STEP: checking the pod's current state and verifying that restartCount is present
Dec 19 12:51:27.978: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:55:28.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mqjm6" for this suite.
Dec 19 12:55:35.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:55:35.190: INFO: namespace: e2e-tests-container-probe-mqjm6, resource: bindings, ignored listing per whitelist
Dec 19 12:55:35.204: INFO: namespace e2e-tests-container-probe-mqjm6 deletion completed in 6.368316493s

• [SLOW TEST:259.556 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
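Editor's note: the liveness-http pod itself is not dumped in the log. A sketch of the shape of such a pod, assuming an HTTP server image that keeps answering 200 on /healthz (image name, port, and probe timings are assumptions, not read from the log):

```yaml
# Hypothetical sketch: an HTTP liveness probe that should never fail,
# so the restart count observed over the ~4-minute watch stays at 0.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed
    livenessProbe:
      httpGet:
        path: /healthz   # endpoint keeps returning 200
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 5
```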
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:55:35.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-b72zd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 19 12:55:35.480: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 19 12:56:24.384: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-b72zd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 12:56:24.384: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 12:56:25.830: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:56:25.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-b72zd" for this suite.
Dec 19 12:56:49.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:56:50.203: INFO: namespace: e2e-tests-pod-network-test-b72zd, resource: bindings, ignored listing per whitelist
Dec 19 12:56:50.241: INFO: namespace e2e-tests-pod-network-test-b72zd deletion completed in 24.392217537s

• [SLOW TEST:75.037 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
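Editor's note: the actual UDP check is the `nc` pipeline logged at 12:56:24 above. A one-shot pod reproducing that check could look like the following sketch (the pod wrapper, image, and `hostNetwork` setting are assumptions; the command, target IP 10.32.0.4, and port 8081 are copied from the log):

```yaml
# Hypothetical sketch: send 'hostName' over UDP to the netserver pod and
# fail if no non-empty reply comes back within 1 second.
apiVersion: v1
kind: Pod
metadata:
  name: udp-check
spec:
  restartPolicy: Never
  hostNetwork: true   # the test exercises node-to-pod connectivity
  containers:
  - name: check
    image: busybox
    command:
    - /bin/sh
    - -c
    - "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\\s*$'"
```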
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:56:50.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-05e732b9-225f-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 12:56:50.552: INFO: Waiting up to 5m0s for pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-secrets-4xql2" to be "success or failure"
Dec 19 12:56:50.657: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 105.237746ms
Dec 19 12:56:52.959: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406747165s
Dec 19 12:56:54.972: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419727904s
Dec 19 12:56:57.436: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884577141s
Dec 19 12:56:59.465: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.913522542s
Dec 19 12:57:01.481: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.929565655s
STEP: Saw pod success
Dec 19 12:57:01.481: INFO: Pod "pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:57:01.494: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004 container secret-env-test: 
STEP: delete the pod
Dec 19 12:57:01.666: INFO: Waiting for pod pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:57:01.684: INFO: Pod pod-secrets-05eacef2-225f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:57:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4xql2" for this suite.
Dec 19 12:57:08.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:57:08.699: INFO: namespace: e2e-tests-secrets-4xql2, resource: bindings, ignored listing per whitelist
Dec 19 12:57:08.733: INFO: namespace e2e-tests-secrets-4xql2 deletion completed in 7.039868321s

• [SLOW TEST:18.492 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
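Editor's note: the Secret and consuming pod are only named, not dumped, in the log. A sketch of the pattern under test, with the container name `secret-env-test` taken from the log and everything else assumed:

```yaml
# Hypothetical sketch: a Secret key surfaced to a pod as an env var.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test       # name assumed
data:
  data-1: dmFsdWUtMQ==    # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets       # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test # container name from the log above
    image: busybox
    command: ["sh", "-c", "env"]   # prints env, including SECRET_DATA
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```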
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:57:08.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 19 12:57:21.554: INFO: Successfully updated pod "labelsupdate10ebb13b-225f-11ea-a3c6-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:57:23.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kz9jf" for this suite.
Dec 19 12:57:49.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:57:49.794: INFO: namespace: e2e-tests-projected-kz9jf, resource: bindings, ignored listing per whitelist
Dec 19 12:57:49.932: INFO: namespace e2e-tests-projected-kz9jf deletion completed in 26.272094624s

• [SLOW TEST:41.198 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
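Editor's note: the labelsupdate pod spec is not shown. A sketch of a projected downwardAPI volume exposing pod labels, which is the mechanism this test exercises (the kubelet rewrites the projected file after the API-side label update logged at 12:57:21); the container name `client-container` style matches the other downwardAPI tests in this run, the rest is assumed:

```yaml
# Hypothetical sketch: pod labels projected into /etc/podinfo/labels;
# modifying the labels via the API updates the file in place.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key: value
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```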
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:57:49.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 12:57:50.264: INFO: Creating deployment "test-recreate-deployment"
Dec 19 12:57:50.276: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 19 12:57:50.290: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 19 12:57:52.440: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 19 12:57:52.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:57:54.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:57:57.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:57:58.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:58:00.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 12:58:02.659: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 19 12:58:02.720: INFO: Updating deployment test-recreate-deployment
Dec 19 12:58:02.720: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 19 12:58:03.466: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-kddc9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kddc9/deployments/test-recreate-deployment,UID:298ece66-225f-11ea-a994-fa163e34d433,ResourceVersion:15350374,Generation:2,CreationTimestamp:2019-12-19 12:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-19 12:58:03 +0000 UTC 2019-12-19 12:58:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-19 12:58:03 +0000 UTC 2019-12-19 12:57:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 19 12:58:03.517: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-kddc9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kddc9/replicasets/test-recreate-deployment-589c4bfd,UID:313407fa-225f-11ea-a994-fa163e34d433,ResourceVersion:15350373,Generation:1,CreationTimestamp:2019-12-19 12:58:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 298ece66-225f-11ea-a994-fa163e34d433 0xc0015c2cff 0xc0015c2d10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 19 12:58:03.517: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 19 12:58:03.517: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-kddc9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kddc9/replicasets/test-recreate-deployment-5bf7f65dc,UID:2992c876-225f-11ea-a994-fa163e34d433,ResourceVersion:15350363,Generation:2,CreationTimestamp:2019-12-19 12:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 298ece66-225f-11ea-a994-fa163e34d433 0xc0015c3050 0xc0015c3051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 19 12:58:03.525: INFO: Pod "test-recreate-deployment-589c4bfd-p8d4w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-p8d4w,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-kddc9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kddc9/pods/test-recreate-deployment-589c4bfd-p8d4w,UID:313619e5-225f-11ea-a994-fa163e34d433,ResourceVersion:15350375,Generation:0,CreationTimestamp:2019-12-19 12:58:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 313407fa-225f-11ea-a994-fa163e34d433 0xc0015c3ddf 0xc0015c3df0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rq96x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rq96x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rq96x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015c3e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015c3e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:58:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 12:58:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-19 12:58:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:58:03.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kddc9" for this suite.
Dec 19 12:58:12.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:58:12.688: INFO: namespace: e2e-tests-deployment-kddc9, resource: bindings, ignored listing per whitelist
Dec 19 12:58:12.696: INFO: namespace e2e-tests-deployment-kddc9 deletion completed in 9.050586879s

• [SLOW TEST:22.765 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
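Editor's note: distilled from the Deployment object dumped above, the essential shape of this test's Deployment is the following sketch. `strategy.type: Recreate` is what makes the old ReplicaSet (the redis revision) scale to 0 before the new one (nginx) creates any pods, which is exactly what the log's ReplicaSet dump shows:

```yaml
# Sketch condensed from the dumped object; fields not shown in the log
# are omitted rather than guessed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate   # no RollingUpdate: old pods are deleted first
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx   # revision 2; revision 1 used the redis test image
        image: docker.io/library/nginx:1.14-alpine
```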
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:58:12.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 19 12:58:13.043: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:58:35.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wtz5l" for this suite.
Dec 19 12:58:59.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:59:00.140: INFO: namespace: e2e-tests-init-container-wtz5l, resource: bindings, ignored listing per whitelist
Dec 19 12:59:00.360: INFO: namespace e2e-tests-init-container-wtz5l deletion completed in 24.48799508s

• [SLOW TEST:47.663 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
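Editor's note: the init-container pod spec is only referenced ("PodSpec: initContainers in spec.initContainers"), not dumped. A sketch of the scenario, assuming busybox images and trivial commands (all names and commands are illustrative): init containers run sequentially to completion before the main container of a RestartAlways pod starts.

```yaml
# Hypothetical sketch: two init containers that must both succeed, in
# order, before the long-running main container is started.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]  # keeps the pod Running
```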
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:59:00.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:59:00.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-jf5vm" to be "success or failure"
Dec 19 12:59:00.698: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.625628ms
Dec 19 12:59:02.798: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113978582s
Dec 19 12:59:04.815: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131352494s
Dec 19 12:59:07.894: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.210146533s
Dec 19 12:59:09.922: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.238577351s
Dec 19 12:59:11.938: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.254230147s
STEP: Saw pod success
Dec 19 12:59:11.938: INFO: Pod "downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:59:11.946: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:59:14.177: INFO: Waiting for pod downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:59:14.225: INFO: Pod downwardapi-volume-5384bf7f-225f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:59:14.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jf5vm" for this suite.
Dec 19 12:59:20.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:59:20.401: INFO: namespace: e2e-tests-projected-jf5vm, resource: bindings, ignored listing per whitelist
Dec 19 12:59:20.611: INFO: namespace e2e-tests-projected-jf5vm deletion completed in 6.298422347s

• [SLOW TEST:20.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
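The "downward API volume plugin" the test above creates a pod for exposes container resource requests as files via a projected volume. A sketch of the kind of manifest involved, using `resourceFieldRef` for the memory request (the same mechanism serves the cpu-request variant); field names follow the downward API volume schema, but the pod/volume names and sizes are assumptions, not values dumped from this run.

```shell
# Sketch: expose a container's memory request as a file via a projected
# downward API volume (hypothetical names; 32Mi is an arbitrary example).
cat > /tmp/downward-mem.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:        # projects the request into the file
              containerName: client-container
              resource: requests.memory
EOF
# kubectl apply -f /tmp/downward-mem.yaml   # requires a cluster; commented out
grep -c 'resourceFieldRef' /tmp/downward-mem.yaml
```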
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:59:20.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 19 12:59:20.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-dsf2n" to be "success or failure"
Dec 19 12:59:20.939: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 108.588139ms
Dec 19 12:59:23.460: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.630097987s
Dec 19 12:59:25.474: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.643856519s
Dec 19 12:59:27.595: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765133575s
Dec 19 12:59:29.644: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813730839s
Dec 19 12:59:31.723: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.893044382s
Dec 19 12:59:33.754: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.924112078s
STEP: Saw pod success
Dec 19 12:59:33.755: INFO: Pod "downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 12:59:34.571: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004 container client-container: 
STEP: delete the pod
Dec 19 12:59:34.940: INFO: Waiting for pod downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004 to disappear
Dec 19 12:59:35.014: INFO: Pod downwardapi-volume-5f837107-225f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 12:59:35.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dsf2n" for this suite.
Dec 19 12:59:41.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 12:59:41.453: INFO: namespace: e2e-tests-projected-dsf2n, resource: bindings, ignored listing per whitelist
Dec 19 12:59:41.520: INFO: namespace e2e-tests-projected-dsf2n deletion completed in 6.429457747s

• [SLOW TEST:20.909 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 12:59:41.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-dmr4
STEP: Creating a pod to test atomic-volume-subpath
Dec 19 12:59:41.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dmr4" in namespace "e2e-tests-subpath-5qn8r" to be "success or failure"
Dec 19 12:59:41.954: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.072625ms
Dec 19 12:59:44.727: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791723003s
Dec 19 12:59:46.751: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.815905051s
Dec 19 12:59:49.998: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06254711s
Dec 19 12:59:52.101: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165457677s
Dec 19 12:59:54.122: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.186810775s
Dec 19 12:59:56.139: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.203516413s
Dec 19 12:59:58.150: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.214027246s
Dec 19 13:00:00.165: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.22959051s
Dec 19 13:00:02.185: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 20.249256269s
Dec 19 13:00:04.204: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 22.268626151s
Dec 19 13:00:06.220: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 24.284822645s
Dec 19 13:00:08.270: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 26.334108295s
Dec 19 13:00:10.285: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 28.349370563s
Dec 19 13:00:12.307: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 30.371769142s
Dec 19 13:00:14.316: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 32.380359667s
Dec 19 13:00:16.342: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 34.406277999s
Dec 19 13:00:18.394: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Running", Reason="", readiness=false. Elapsed: 36.458905189s
Dec 19 13:00:22.135: INFO: Pod "pod-subpath-test-projected-dmr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.199578118s
STEP: Saw pod success
Dec 19 13:00:22.135: INFO: Pod "pod-subpath-test-projected-dmr4" satisfied condition "success or failure"
Dec 19 13:00:22.148: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-dmr4 container test-container-subpath-projected-dmr4: 
STEP: delete the pod
Dec 19 13:00:22.498: INFO: Waiting for pod pod-subpath-test-projected-dmr4 to disappear
Dec 19 13:00:22.600: INFO: Pod pod-subpath-test-projected-dmr4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dmr4
Dec 19 13:00:22.600: INFO: Deleting pod "pod-subpath-test-projected-dmr4" in namespace "e2e-tests-subpath-5qn8r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:00:22.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5qn8r" for this suite.
Dec 19 13:00:30.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:00:30.933: INFO: namespace: e2e-tests-subpath-5qn8r, resource: bindings, ignored listing per whitelist
Dec 19 13:00:31.002: INFO: namespace e2e-tests-subpath-5qn8r deletion completed in 8.351530939s

• [SLOW TEST:49.481 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
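The "atomic-volume-subpath" pod above mounts a single path out of a projected volume via `subPath`, verifying that subPath mounts keep working while the underlying volume is atomically updated. A rough sketch of that shape (the ConfigMap source, key, and all names are hypothetical assumptions, not read from this log):

```shell
# Sketch: subPath mount of one file from a projected volume (hypothetical names).
cat > /tmp/subpath-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /test-dir/data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-dir
      subPath: data            # mount a single entry out of the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-config    # hypothetical ConfigMap providing a 'data' key
EOF
# kubectl apply -f /tmp/subpath-demo.yaml   # requires a cluster; commented out
grep -q 'subPath: data' /tmp/subpath-demo.yaml && echo subpath-present
```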
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:00:31.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 13:00:31.451: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 19 13:00:31.505: INFO: Number of nodes with available pods: 0
Dec 19 13:00:31.505: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 19 13:00:31.727: INFO: Number of nodes with available pods: 0
Dec 19 13:00:31.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:33.764: INFO: Number of nodes with available pods: 0
Dec 19 13:00:33.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:34.761: INFO: Number of nodes with available pods: 0
Dec 19 13:00:34.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:35.747: INFO: Number of nodes with available pods: 0
Dec 19 13:00:35.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:36.756: INFO: Number of nodes with available pods: 0
Dec 19 13:00:36.756: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:37.739: INFO: Number of nodes with available pods: 0
Dec 19 13:00:37.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:38.743: INFO: Number of nodes with available pods: 0
Dec 19 13:00:38.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:39.747: INFO: Number of nodes with available pods: 0
Dec 19 13:00:39.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:40.752: INFO: Number of nodes with available pods: 0
Dec 19 13:00:40.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:41.748: INFO: Number of nodes with available pods: 0
Dec 19 13:00:41.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:42.767: INFO: Number of nodes with available pods: 0
Dec 19 13:00:42.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:43.780: INFO: Number of nodes with available pods: 1
Dec 19 13:00:43.780: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 19 13:00:43.972: INFO: Number of nodes with available pods: 1
Dec 19 13:00:43.972: INFO: Number of running nodes: 0, number of available pods: 1
Dec 19 13:00:44.986: INFO: Number of nodes with available pods: 0
Dec 19 13:00:44.986: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 19 13:00:45.011: INFO: Number of nodes with available pods: 0
Dec 19 13:00:45.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:46.024: INFO: Number of nodes with available pods: 0
Dec 19 13:00:46.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:48.467: INFO: Number of nodes with available pods: 0
Dec 19 13:00:48.467: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:49.046: INFO: Number of nodes with available pods: 0
Dec 19 13:00:49.046: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:50.040: INFO: Number of nodes with available pods: 0
Dec 19 13:00:50.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:51.363: INFO: Number of nodes with available pods: 0
Dec 19 13:00:51.363: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:52.031: INFO: Number of nodes with available pods: 0
Dec 19 13:00:52.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:53.100: INFO: Number of nodes with available pods: 0
Dec 19 13:00:53.100: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:54.058: INFO: Number of nodes with available pods: 0
Dec 19 13:00:54.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:55.024: INFO: Number of nodes with available pods: 0
Dec 19 13:00:55.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:56.046: INFO: Number of nodes with available pods: 0
Dec 19 13:00:56.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:57.025: INFO: Number of nodes with available pods: 0
Dec 19 13:00:57.025: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:58.078: INFO: Number of nodes with available pods: 0
Dec 19 13:00:58.078: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:00:59.023: INFO: Number of nodes with available pods: 0
Dec 19 13:00:59.023: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:00.031: INFO: Number of nodes with available pods: 0
Dec 19 13:01:00.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:01.033: INFO: Number of nodes with available pods: 0
Dec 19 13:01:01.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:02.031: INFO: Number of nodes with available pods: 0
Dec 19 13:01:02.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:03.182: INFO: Number of nodes with available pods: 0
Dec 19 13:01:03.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:04.801: INFO: Number of nodes with available pods: 0
Dec 19 13:01:04.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:05.375: INFO: Number of nodes with available pods: 0
Dec 19 13:01:05.375: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:06.227: INFO: Number of nodes with available pods: 0
Dec 19 13:01:06.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:07.040: INFO: Number of nodes with available pods: 0
Dec 19 13:01:07.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:08.037: INFO: Number of nodes with available pods: 0
Dec 19 13:01:08.037: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:09.664: INFO: Number of nodes with available pods: 0
Dec 19 13:01:09.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:10.205: INFO: Number of nodes with available pods: 0
Dec 19 13:01:10.205: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:11.024: INFO: Number of nodes with available pods: 0
Dec 19 13:01:11.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:12.024: INFO: Number of nodes with available pods: 0
Dec 19 13:01:12.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:13.030: INFO: Number of nodes with available pods: 0
Dec 19 13:01:13.030: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 19 13:01:14.039: INFO: Number of nodes with available pods: 1
Dec 19 13:01:14.039: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vsw9c, will wait for the garbage collector to delete the pods
Dec 19 13:01:14.150: INFO: Deleting DaemonSet.extensions daemon-set took: 16.77131ms
Dec 19 13:01:14.350: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.473483ms
Dec 19 13:01:32.673: INFO: Number of nodes with available pods: 0
Dec 19 13:01:32.674: INFO: Number of running nodes: 0, number of available pods: 0
Dec 19 13:01:32.691: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vsw9c/daemonsets","resourceVersion":"15350814"},"items":null}

Dec 19 13:01:32.712: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vsw9c/pods","resourceVersion":"15350814"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:01:32.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vsw9c" for this suite.
Dec 19 13:01:40.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:01:40.994: INFO: namespace: e2e-tests-daemonsets-vsw9c, resource: bindings, ignored listing per whitelist
Dec 19 13:01:41.023: INFO: namespace e2e-tests-daemonsets-vsw9c deletion completed in 8.230711683s

• [SLOW TEST:70.021 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
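The "complex daemon" test above drives DaemonSet scheduling through node labels: a DaemonSet with a nodeSelector places pods only on matching nodes, so relabeling a node from blue to green unschedules and reschedules the daemon pod, and the test also flips the update strategy to RollingUpdate. A sketch of the manifest side of that (label key/value and image are assumptions, not taken from this log):

```shell
# Sketch: DaemonSet gated on a node label, RollingUpdate strategy
# (hypothetical label key 'color' and image; not dumped from this run).
cat > /tmp/ds.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # the strategy the test switches to
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green           # pods schedule only on nodes with this label
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# kubectl label node <node-name> color=green --overwrite  # needs a cluster
# kubectl apply -f /tmp/ds.yaml                           # needs a cluster
grep -q 'nodeSelector' /tmp/ds.yaml && echo selector-present
```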
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:01:41.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gxrnj in namespace e2e-tests-proxy-vpg68
I1219 13:01:41.358632       9 runners.go:184] Created replication controller with name: proxy-service-gxrnj, namespace: e2e-tests-proxy-vpg68, replica count: 1
I1219 13:01:42.410119       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:43.411330       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:44.412042       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:45.412351       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:46.412626       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:47.413053       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:48.413344       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:49.413970       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:50.414798       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:51.415229       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:52.415653       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:53.415971       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1219 13:01:54.416353       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1219 13:01:55.416636       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1219 13:01:56.417000       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1219 13:01:57.417451       9 runners.go:184] proxy-service-gxrnj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 19 13:01:57.517: INFO: setup took 16.227941698s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 19 13:01:57.548: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vpg68/pods/proxy-service-gxrnj-d649t:160/proxy/: foo (200; 29.577952ms)
Dec 19 13:01:57.548: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vpg68/pods/http:proxy-service-gxrnj-d649t:160/proxy/: foo (200; 30.619473ms)
Dec 19 13:01:57.549: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vpg68/pods/proxy-service-gxrnj-d649t:1080/proxy/: ...
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1219 13:02:24.765847       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 13:02:24.765: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:02:24.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mcdvl" for this suite.
Dec 19 13:02:31.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:02:31.230: INFO: namespace: e2e-tests-gc-mcdvl, resource: bindings, ignored listing per whitelist
Dec 19 13:02:31.337: INFO: namespace e2e-tests-gc-mcdvl deletion completed in 6.564163019s

• [SLOW TEST:17.775 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:02:31.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 13:02:31.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dt2pd'
Dec 19 13:02:33.838: INFO: stderr: ""
Dec 19 13:02:33.839: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 19 13:02:33.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-dt2pd'
Dec 19 13:02:43.307: INFO: stderr: ""
Dec 19 13:02:43.308: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:02:43.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dt2pd" for this suite.
Dec 19 13:02:49.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:02:49.469: INFO: namespace: e2e-tests-kubectl-dt2pd, resource: bindings, ignored listing per whitelist
Dec 19 13:02:49.664: INFO: namespace e2e-tests-kubectl-dt2pd deletion completed in 6.345547446s

• [SLOW TEST:18.326 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
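For reference, the `kubectl run --restart=Never --generator=run-pod/v1` invocation exercised above creates a bare Pod (no managing controller) directly from the image. An approximately equivalent manifest is sketched below; the pod name and `run` label mirror what that generator produced in the v1.13-era client used for this run (the `run-pod/v1` generator flag has since been removed from newer kubectl releases), but the exact object is an assumption, not captured from this run:

```yaml
# Hypothetical sketch of the Pod object that
# `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 ...`
# would have created in the test namespace.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod    # label the run-pod/v1 generator applies
spec:
  restartPolicy: Never         # the flag under test: pod, not Deployment/Job
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```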
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:02:49.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-dc4c28a6-225f-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 13:02:50.235: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-6vbxt" to be "success or failure"
Dec 19 13:02:50.368: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 132.792489ms
Dec 19 13:02:52.636: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400733955s
Dec 19 13:02:54.684: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449620482s
Dec 19 13:02:56.693: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458483723s
Dec 19 13:02:59.234: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.998764407s
Dec 19 13:03:01.300: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.064719068s
Dec 19 13:03:04.126: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.891614749s
STEP: Saw pod success
Dec 19 13:03:04.127: INFO: Pod "pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 13:03:04.136: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 19 13:03:04.545: INFO: Waiting for pod pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004 to disappear
Dec 19 13:03:04.579: INFO: Pod pod-projected-secrets-dc511857-225f-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:03:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6vbxt" for this suite.
Dec 19 13:03:12.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:03:12.813: INFO: namespace: e2e-tests-projected-6vbxt, resource: bindings, ignored listing per whitelist
Dec 19 13:03:13.073: INFO: namespace e2e-tests-projected-6vbxt deletion completed in 8.472198851s

• [SLOW TEST:23.409 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
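The projected-secret test above builds its pod programmatically; a minimal manifest illustrating the same pattern (a Secret consumed through a `projected` volume with `defaultMode` set) is sketched below. All names, the image, and the mode value 0400 are illustrative assumptions, not taken from this run:

```yaml
# Hypothetical sketch: consume a Secret via a projected volume,
# with file permissions forced by defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    # List the mounted files so their modes are visible in the pod logs.
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # the knob this conformance test verifies
      sources:
      - secret:
          name: projected-secret-test-example   # assumed pre-created Secret
```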
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:03:13.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-t4qn
STEP: Creating a pod to test atomic-volume-subpath
Dec 19 13:03:13.728: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-t4qn" in namespace "e2e-tests-subpath-7sf8m" to be "success or failure"
Dec 19 13:03:13.784: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 55.437828ms
Dec 19 13:03:16.372: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644018509s
Dec 19 13:03:18.413: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.684976841s
Dec 19 13:03:20.433: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704310575s
Dec 19 13:03:22.460: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731963008s
Dec 19 13:03:24.492: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763751315s
Dec 19 13:03:26.542: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.81339509s
Dec 19 13:03:28.565: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.836646185s
Dec 19 13:03:30.923: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.194807502s
Dec 19 13:03:32.949: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.220823327s
Dec 19 13:03:34.970: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.241464733s
Dec 19 13:03:36.989: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 23.260472559s
Dec 19 13:03:39.087: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 25.359054742s
Dec 19 13:03:41.240: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 27.512147122s
Dec 19 13:03:43.259: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 29.530952961s
Dec 19 13:03:45.279: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 31.551085361s
Dec 19 13:03:47.299: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 33.570828131s
Dec 19 13:03:49.327: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 35.599266279s
Dec 19 13:03:52.011: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Running", Reason="", readiness=false. Elapsed: 38.282355019s
Dec 19 13:03:54.029: INFO: Pod "pod-subpath-test-secret-t4qn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.301155786s
STEP: Saw pod success
Dec 19 13:03:54.029: INFO: Pod "pod-subpath-test-secret-t4qn" satisfied condition "success or failure"
Dec 19 13:03:54.035: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-t4qn container test-container-subpath-secret-t4qn: 
STEP: delete the pod
Dec 19 13:03:55.199: INFO: Waiting for pod pod-subpath-test-secret-t4qn to disappear
Dec 19 13:03:55.366: INFO: Pod pod-subpath-test-secret-t4qn no longer exists
STEP: Deleting pod pod-subpath-test-secret-t4qn
Dec 19 13:03:55.367: INFO: Deleting pod "pod-subpath-test-secret-t4qn" in namespace "e2e-tests-subpath-7sf8m"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:03:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7sf8m" for this suite.
Dec 19 13:04:01.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:04:01.516: INFO: namespace: e2e-tests-subpath-7sf8m, resource: bindings, ignored listing per whitelist
Dec 19 13:04:01.629: INFO: namespace e2e-tests-subpath-7sf8m deletion completed in 6.239335505s

• [SLOW TEST:48.556 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
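The subpath test above exercises the atomic-writer volume types (Secret, ConfigMap, Downward API, projected) mounted through `subPath`, where a single key of the volume is presented to the container as one file. A minimal sketch of that pattern follows; the pod name, image, secret name, and key are illustrative assumptions:

```yaml
# Hypothetical sketch: mount one key of a Secret volume via subPath,
# so the container sees a single file rather than the whole volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/test-volume/data-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data-file
      subPath: data-file       # pick out one key of the volume
  volumes:
  - name: test-volume
    secret:
      secretName: my-secret    # assumed to contain a key named data-file
```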
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:04:01.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 13:04:01.973: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 19 13:04:06.992: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 19 13:04:13.017: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 19 13:04:15.029: INFO: Creating deployment "test-rollover-deployment"
Dec 19 13:04:15.046: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 19 13:04:17.448: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 19 13:04:17.469: INFO: Ensure that both replica sets have 1 created replica
Dec 19 13:04:17.850: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 19 13:04:17.914: INFO: Updating deployment test-rollover-deployment
Dec 19 13:04:17.915: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 19 13:04:20.290: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 19 13:04:20.301: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 19 13:04:20.309: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:20.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:22.875: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:22.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:24.331: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:24.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:26.341: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:26.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:29.802: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:29.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:30.390: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:30.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:32.336: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:32.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:34.342: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:34.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357472, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:36.385: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:36.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357472, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:38.475: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:38.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357472, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:40.327: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:40.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357472, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:42.353: INFO: all replica sets need to contain the pod-template-hash label
Dec 19 13:04:42.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357472, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712357455, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 19 13:04:44.701: INFO: 
Dec 19 13:04:44.702: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 19 13:04:44.950: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-jldzt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jldzt/deployments/test-rollover-deployment,UID:0ee513fd-2260-11ea-a994-fa163e34d433,ResourceVersion:15351287,Generation:2,CreationTimestamp:2019-12-19 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-19 13:04:15 +0000 UTC 2019-12-19 13:04:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-19 13:04:43 +0000 UTC 2019-12-19 13:04:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 19 13:04:44.969: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-jldzt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jldzt/replicasets/test-rollover-deployment-5b8479fdb6,UID:10a04a6f-2260-11ea-a994-fa163e34d433,ResourceVersion:15351278,Generation:2,CreationTimestamp:2019-12-19 13:04:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0ee513fd-2260-11ea-a994-fa163e34d433 0xc0012a0ac7 0xc0012a0ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 19 13:04:44.969: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 19 13:04:44.969: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-jldzt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jldzt/replicasets/test-rollover-controller,UID:070962fe-2260-11ea-a994-fa163e34d433,ResourceVersion:15351286,Generation:2,CreationTimestamp:2019-12-19 13:04:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0ee513fd-2260-11ea-a994-fa163e34d433 0xc0012a07f7 0xc0012a07f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 19 13:04:44.970: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-jldzt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jldzt/replicasets/test-rollover-deployment-58494b7559,UID:0eec2c4a-2260-11ea-a994-fa163e34d433,ResourceVersion:15351239,Generation:2,CreationTimestamp:2019-12-19 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0ee513fd-2260-11ea-a994-fa163e34d433 0xc0012a09e7 0xc0012a09e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 19 13:04:45.077: INFO: Pod "test-rollover-deployment-5b8479fdb6-vhlmp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-vhlmp,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-jldzt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jldzt/pods/test-rollover-deployment-5b8479fdb6-vhlmp,UID:11332b0c-2260-11ea-a994-fa163e34d433,ResourceVersion:15351263,Generation:0,CreationTimestamp:2019-12-19 13:04:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 10a04a6f-2260-11ea-a994-fa163e34d433 0xc0015c3237 0xc0015c3238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6lsnn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6lsnn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6lsnn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015c32a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015c32c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 13:04:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 13:04:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 13:04:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-19 13:04:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-19 13:04:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-19 13:04:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a94bff783d11350023e901dbbe784df84687f34efbd120524297ce0a68f69a56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:04:45.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jldzt" for this suite.
Dec 19 13:04:59.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:04:59.220: INFO: namespace: e2e-tests-deployment-jldzt, resource: bindings, ignored listing per whitelist
Dec 19 13:04:59.352: INFO: namespace e2e-tests-deployment-jldzt deletion completed in 14.258246024s

• [SLOW TEST:57.722 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
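The rollover test above passes once only the newest ReplicaSet still holds replicas: in the dumps, `test-rollover-controller` and `test-rollover-deployment-58494b7559` are both scaled to `Replicas:*0`, while the pod from `test-rollover-deployment-5b8479fdb6` is available. A minimal sketch of that completion check (the helper name and the revision number 2 for the new ReplicaSet are assumptions for illustration, not the framework's actual code):

```python
def rollover_complete(replica_sets):
    """Rollover is done when exactly one ReplicaSet still has replicas
    and that ReplicaSet carries the highest deployment revision."""
    active = [rs for rs in replica_sets if rs["replicas"] > 0]
    if len(active) != 1:
        return False
    newest = max(replica_sets, key=lambda rs: rs["revision"])
    return active[0] is newest

# Replica counts taken from the ReplicaSet dumps in the log above; the
# revision of the new RS is assumed to be 2 (one past the
# deployment.kubernetes.io/revision: 1 annotation on the old RS).
replica_sets = [
    {"name": "test-rollover-controller", "replicas": 0, "revision": 0},
    {"name": "test-rollover-deployment-58494b7559", "replicas": 0, "revision": 1},
    {"name": "test-rollover-deployment-5b8479fdb6", "replicas": 1, "revision": 2},
]
print(rollover_complete(replica_sets))  # → True
```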
SSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:04:59.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 19 13:05:35.946: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:35.947: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:36.632: INFO: Exec stderr: ""
Dec 19 13:05:36.632: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:36.632: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:37.115: INFO: Exec stderr: ""
Dec 19 13:05:37.115: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:37.115: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:37.386: INFO: Exec stderr: ""
Dec 19 13:05:37.386: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:37.386: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:37.730: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 19 13:05:37.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:37.731: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:38.094: INFO: Exec stderr: ""
Dec 19 13:05:38.094: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:38.094: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:38.741: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 19 13:05:38.741: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:38.741: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:39.179: INFO: Exec stderr: ""
Dec 19 13:05:39.179: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:39.179: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:39.534: INFO: Exec stderr: ""
Dec 19 13:05:39.534: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:39.534: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:39.835: INFO: Exec stderr: ""
Dec 19 13:05:39.835: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k78hn PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 19 13:05:39.835: INFO: >>> kubeConfig: /root/.kube/config
Dec 19 13:05:40.157: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:05:40.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-k78hn" for this suite.
Dec 19 13:06:34.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:06:34.297: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-k78hn, resource: bindings, ignored listing per whitelist
Dec 19 13:06:34.314: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-k78hn deletion completed in 54.142482011s

• [SLOW TEST:94.962 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
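The three verification STEPs above encode one rule: the kubelet manages a container's `/etc/hosts` only when the pod does not use the host network and the container does not mount its own file at `/etc/hosts`. A toy restatement of that rule (the function is illustrative, not the e2e framework's code):

```python
def kubelet_manages_etc_hosts(host_network, container_mount_paths):
    """The kubelet rewrites /etc/hosts only for non-hostNetwork pods,
    and only in containers that do not mount over /etc/hosts."""
    if host_network:
        return False
    return "/etc/hosts" not in container_mount_paths

# busybox-1/2 in test-pod: managed; busybox-3 mounts /etc/hosts: not
# managed; every container in test-host-network-pod: not managed.
print(kubelet_manages_etc_hosts(False, []))              # → True
print(kubelet_manages_etc_hosts(False, ["/etc/hosts"]))  # → False
print(kubelet_manages_etc_hosts(True, []))               # → False
```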
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:06:34.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1219 13:07:05.685906       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 19 13:07:05.686: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:07:05.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-g6ltz" for this suite.
Dec 19 13:07:16.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:07:17.235: INFO: namespace: e2e-tests-gc-g6ltz, resource: bindings, ignored listing per whitelist
Dec 19 13:07:17.369: INFO: namespace e2e-tests-gc-g6ltz deletion completed in 11.677938982s

• [SLOW TEST:43.055 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
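The garbage collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A toy model of the propagation semantics, assuming simplified object dicts (with "Orphan" the dependent keeps existing and merely loses its ownerReference; with "Background"/"Foreground" it is cascaded away):

```python
def delete_with_policy(objects, owner_uid, policy):
    """Toy deleteOptions.propagationPolicy: the owner is always removed;
    dependents are orphaned (ownerReference stripped) or cascaded."""
    remaining = []
    for obj in objects:
        if obj["uid"] == owner_uid:
            continue  # the owner itself is deleted in every mode
        if owner_uid in obj["owner_refs"]:
            if policy == "Orphan":
                obj = {**obj,
                       "owner_refs": [u for u in obj["owner_refs"]
                                      if u != owner_uid]}
            else:
                continue  # cascading delete removes the dependent too
        remaining.append(obj)
    return remaining

deployment = {"uid": "deploy-1", "owner_refs": []}
replica_set = {"uid": "rs-1", "owner_refs": ["deploy-1"]}
left = delete_with_policy([deployment, replica_set], "deploy-1", "Orphan")
print(left)  # → [{'uid': 'rs-1', 'owner_refs': []}]
```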
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:07:17.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:07:37.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-7sld2" for this suite.
Dec 19 13:08:03.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:08:03.247: INFO: namespace: e2e-tests-replication-controller-7sld2, resource: bindings, ignored listing per whitelist
Dec 19 13:08:04.894: INFO: namespace e2e-tests-replication-controller-7sld2 deletion completed in 27.789332158s

• [SLOW TEST:47.524 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
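The adoption test above creates a pod with a `name` label first, then a ReplicationController whose selector matches it, and checks that the orphan pod is adopted. The matching rule can be sketched like this (an illustrative predicate, not the controller-manager's implementation):

```python
def adoptable(pod, selector):
    """A controller adopts a pod only if the pod has no controlling owner
    yet and every key/value in the selector matches the pod's labels."""
    if pod.get("controller_ref") is not None:
        return False
    return all(pod["labels"].get(k) == v for k, v in selector.items())

orphan = {"labels": {"name": "pod-adoption"}, "controller_ref": None}
owned = {"labels": {"name": "pod-adoption"}, "controller_ref": "rc-1"}
print(adoptable(orphan, {"name": "pod-adoption"}))  # → True
print(adoptable(owned, {"name": "pod-adoption"}))   # → False
```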
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:08:04.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 19 13:08:05.282: INFO: Waiting up to 5m0s for pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004" in namespace "e2e-tests-downward-api-j5zh6" to be "success or failure"
Dec 19 13:08:05.463: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 181.310376ms
Dec 19 13:08:07.949: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66709219s
Dec 19 13:08:09.964: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682480118s
Dec 19 13:08:13.553: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270908735s
Dec 19 13:08:15.567: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.285516017s
Dec 19 13:08:17.579: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.297232606s
Dec 19 13:08:19.599: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.31691812s
Dec 19 13:08:21.614: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.332741459s
STEP: Saw pod success
Dec 19 13:08:21.615: INFO: Pod "downward-api-981d2536-2260-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 13:08:21.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-981d2536-2260-11ea-a3c6-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 19 13:08:22.005: INFO: Waiting for pod downward-api-981d2536-2260-11ea-a3c6-0242ac110004 to disappear
Dec 19 13:08:22.016: INFO: Pod downward-api-981d2536-2260-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:08:22.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j5zh6" for this suite.
Dec 19 13:08:28.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:08:28.130: INFO: namespace: e2e-tests-downward-api-j5zh6, resource: bindings, ignored listing per whitelist
Dec 19 13:08:28.189: INFO: namespace e2e-tests-downward-api-j5zh6 deletion completed in 6.162467192s

• [SLOW TEST:23.294 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
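The repeated `Phase="Pending" ... Elapsed: ...` lines throughout this run come from a single polling pattern: fetch the pod phase every couple of seconds until it reaches a terminal phase or the 5m0s timeout elapses. A generic sketch of that loop, using a fake phase source in place of the API server (the function name and signature are assumptions, not the framework's `WaitForPodSuccessInNamespace`):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase or timeout, logging
    elapsed time in the style of the framework's "Elapsed:" lines."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", elapsed {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Stand-in for the API server: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), interval=0.01)
print(result)  # → Succeeded
```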
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:08:28.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:08:38.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6jtfr" for this suite.
Dec 19 13:09:24.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:09:24.598: INFO: namespace: e2e-tests-kubelet-test-6jtfr, resource: bindings, ignored listing per whitelist
Dec 19 13:09:24.679: INFO: namespace e2e-tests-kubelet-test-6jtfr deletion completed in 46.196965584s

• [SLOW TEST:56.489 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:09:24.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 19 13:09:25.050: INFO: Waiting up to 5m0s for pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-8qh6v" to be "success or failure"
Dec 19 13:09:25.065: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.296962ms
Dec 19 13:09:27.414: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364523081s
Dec 19 13:09:29.445: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394807536s
Dec 19 13:09:31.552: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501980463s
Dec 19 13:09:33.589: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538885154s
Dec 19 13:09:35.603: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.553497795s
Dec 19 13:09:37.924: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.874004324s
STEP: Saw pod success
Dec 19 13:09:37.924: INFO: Pod "pod-c7abf6f9-2260-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 13:09:37.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c7abf6f9-2260-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 13:09:38.699: INFO: Waiting for pod pod-c7abf6f9-2260-11ea-a3c6-0242ac110004 to disappear
Dec 19 13:09:38.814: INFO: Pod pod-c7abf6f9-2260-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:09:38.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8qh6v" for this suite.
Dec 19 13:09:46.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:09:47.193: INFO: namespace: e2e-tests-emptydir-8qh6v, resource: bindings, ignored listing per whitelist
Dec 19 13:09:47.207: INFO: namespace e2e-tests-emptydir-8qh6v deletion completed in 8.341211512s

• [SLOW TEST:22.527 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
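The emptyDir test above mounts a volume with mode 0777 on the node's default medium and has the pod verify the permission bits. The same permission check can be reproduced locally, using a temporary directory as a stand-in for the volume (purely illustrative; the real test inspects the mount from inside the container):

```python
import os
import stat
import tempfile

# Create a directory, give it the 0777 mode the test expects, and read
# the permission bits back the way the test-container would.
with tempfile.TemporaryDirectory() as vol:
    os.chmod(vol, 0o777)
    mode = stat.S_IMODE(os.stat(vol).st_mode)
print(oct(mode))  # → 0o777
```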
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:09:47.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 19 13:09:47.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5zxrb'
Dec 19 13:09:47.718: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 19 13:09:47.718: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 19 13:09:49.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-5zxrb'
Dec 19 13:09:50.429: INFO: stderr: ""
Dec 19 13:09:50.429: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:09:50.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5zxrb" for this suite.
Dec 19 13:09:57.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:09:57.646: INFO: namespace: e2e-tests-kubectl-5zxrb, resource: bindings, ignored listing per whitelist
Dec 19 13:09:57.681: INFO: namespace e2e-tests-kubectl-5zxrb deletion completed in 7.175569234s

• [SLOW TEST:10.474 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:09:57.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-db42728c-2260-11ea-a3c6-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 19 13:09:58.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004" in namespace "e2e-tests-projected-cfk88" to be "success or failure"
Dec 19 13:09:58.038: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.396737ms
Dec 19 13:10:00.063: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030423139s
Dec 19 13:10:02.081: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048478913s
Dec 19 13:10:04.226: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193407954s
Dec 19 13:10:06.966: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.933937706s
Dec 19 13:10:09.013: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.980255491s
Dec 19 13:10:11.139: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.106872101s
Dec 19 13:10:13.155: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.122958842s
STEP: Saw pod success
Dec 19 13:10:13.155: INFO: Pod "pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 13:10:13.160: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 19 13:10:14.585: INFO: Waiting for pod pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004 to disappear
Dec 19 13:10:14.714: INFO: Pod pod-projected-secrets-db44fd80-2260-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:10:14.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cfk88" for this suite.
Dec 19 13:10:20.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:10:21.053: INFO: namespace: e2e-tests-projected-cfk88, resource: bindings, ignored listing per whitelist
Dec 19 13:10:21.092: INFO: namespace e2e-tests-projected-cfk88 deletion completed in 6.341190683s

• [SLOW TEST:23.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:10:21.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 19 13:10:21.565: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 19 13:10:21.582: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-588vp/daemonsets","resourceVersion":"15351983"},"items":null}

Dec 19 13:10:21.669: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-588vp/pods","resourceVersion":"15351983"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:10:21.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-588vp" for this suite.
Dec 19 13:10:27.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:10:28.096: INFO: namespace: e2e-tests-daemonsets-588vp, resource: bindings, ignored listing per whitelist
Dec 19 13:10:28.105: INFO: namespace e2e-tests-daemonsets-588vp deletion completed in 6.404178758s

S [SKIPPING] [7.013 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 19 13:10:21.565: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:10:28.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 19 13:10:28.331: INFO: Waiting up to 5m0s for pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004" in namespace "e2e-tests-emptydir-c9qpx" to be "success or failure"
Dec 19 13:10:28.368: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.540664ms
Dec 19 13:10:30.385: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054642991s
Dec 19 13:10:32.421: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089895414s
Dec 19 13:10:35.652: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.321535259s
Dec 19 13:10:37.688: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.356990214s
Dec 19 13:10:39.963: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.63254827s
Dec 19 13:10:42.018: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.687793396s
STEP: Saw pod success
Dec 19 13:10:42.019: INFO: Pod "pod-ed63a4cd-2260-11ea-a3c6-0242ac110004" satisfied condition "success or failure"
Dec 19 13:10:42.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ed63a4cd-2260-11ea-a3c6-0242ac110004 container test-container: 
STEP: delete the pod
Dec 19 13:10:42.269: INFO: Waiting for pod pod-ed63a4cd-2260-11ea-a3c6-0242ac110004 to disappear
Dec 19 13:10:42.294: INFO: Pod pod-ed63a4cd-2260-11ea-a3c6-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:10:42.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c9qpx" for this suite.
Dec 19 13:10:48.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:10:48.436: INFO: namespace: e2e-tests-emptydir-c9qpx, resource: bindings, ignored listing per whitelist
Dec 19 13:10:48.588: INFO: namespace e2e-tests-emptydir-c9qpx deletion completed in 6.285948881s

• [SLOW TEST:20.483 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:10:48.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 19 13:10:48.924: INFO: namespace e2e-tests-kubectl-g7n2d
Dec 19 13:10:48.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-g7n2d'
Dec 19 13:10:49.339: INFO: stderr: ""
Dec 19 13:10:49.339: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 19 13:10:50.576: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:50.576: INFO: Found 0 / 1
Dec 19 13:10:51.348: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:51.348: INFO: Found 0 / 1
Dec 19 13:10:52.393: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:52.393: INFO: Found 0 / 1
Dec 19 13:10:53.354: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:53.354: INFO: Found 0 / 1
Dec 19 13:10:54.349: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:54.349: INFO: Found 0 / 1
Dec 19 13:10:55.781: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:55.781: INFO: Found 0 / 1
Dec 19 13:10:56.377: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:56.377: INFO: Found 0 / 1
Dec 19 13:10:57.358: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:57.358: INFO: Found 0 / 1
Dec 19 13:10:58.354: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:58.354: INFO: Found 0 / 1
Dec 19 13:10:59.362: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:10:59.363: INFO: Found 0 / 1
Dec 19 13:11:00.354: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:11:00.354: INFO: Found 1 / 1
Dec 19 13:11:00.354: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 19 13:11:00.361: INFO: Selector matched 1 pods for map[app:redis]
Dec 19 13:11:00.361: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 19 13:11:00.361: INFO: wait on redis-master startup in e2e-tests-kubectl-g7n2d 
Dec 19 13:11:00.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fl8vr redis-master --namespace=e2e-tests-kubectl-g7n2d'
Dec 19 13:11:00.688: INFO: stderr: ""
Dec 19 13:11:00.688: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 19 Dec 13:10:58.255 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Dec 13:10:58.255 # Server started, Redis version 3.2.12\n1:M 19 Dec 13:10:58.256 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Dec 13:10:58.256 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 19 13:11:00.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-g7n2d'
Dec 19 13:11:00.961: INFO: stderr: ""
Dec 19 13:11:00.961: INFO: stdout: "service/rm2 exposed\n"
Dec 19 13:11:00.971: INFO: Service rm2 in namespace e2e-tests-kubectl-g7n2d found.
STEP: exposing service
Dec 19 13:11:02.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-g7n2d'
Dec 19 13:11:03.200: INFO: stderr: ""
Dec 19 13:11:03.200: INFO: stdout: "service/rm3 exposed\n"
Dec 19 13:11:03.259: INFO: Service rm3 in namespace e2e-tests-kubectl-g7n2d found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:11:05.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g7n2d" for this suite.
Dec 19 13:11:33.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:11:33.673: INFO: namespace: e2e-tests-kubectl-g7n2d, resource: bindings, ignored listing per whitelist
Dec 19 13:11:33.867: INFO: namespace e2e-tests-kubectl-g7n2d deletion completed in 28.57776506s

• [SLOW TEST:45.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:11:33.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:11:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bbmxx" for this suite.
Dec 19 13:11:58.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:11:59.049: INFO: namespace: e2e-tests-pods-bbmxx, resource: bindings, ignored listing per whitelist
Dec 19 13:11:59.057: INFO: namespace e2e-tests-pods-bbmxx deletion completed in 24.486939018s

• [SLOW TEST:25.189 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 19 13:11:59.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-llghj
Dec 19 13:12:11.439: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-llghj
STEP: checking the pod's current state and verifying that restartCount is present
Dec 19 13:12:11.445: INFO: Initial restart count of pod liveness-http is 0
Dec 19 13:12:35.947: INFO: Restart count of pod e2e-tests-container-probe-llghj/liveness-http is now 1 (24.502361191s elapsed)
Dec 19 13:12:55.173: INFO: Restart count of pod e2e-tests-container-probe-llghj/liveness-http is now 2 (43.728319644s elapsed)
Dec 19 13:13:17.475: INFO: Restart count of pod e2e-tests-container-probe-llghj/liveness-http is now 3 (1m6.03055219s elapsed)
Dec 19 13:13:38.282: INFO: Restart count of pod e2e-tests-container-probe-llghj/liveness-http is now 4 (1m26.837554768s elapsed)
Dec 19 13:14:40.364: INFO: Restart count of pod e2e-tests-container-probe-llghj/liveness-http is now 5 (2m28.919149477s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 19 13:14:40.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-llghj" for this suite.
Dec 19 13:14:46.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 19 13:14:46.747: INFO: namespace: e2e-tests-container-probe-llghj, resource: bindings, ignored listing per whitelist
Dec 19 13:14:46.769: INFO: namespace e2e-tests-container-probe-llghj deletion completed in 6.221202774s

• [SLOW TEST:167.712 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
Dec 19 13:14:46.770: INFO: Running AfterSuite actions on all nodes
Dec 19 13:14:46.770: INFO: Running AfterSuite actions on node 1
Dec 19 13:14:46.770: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8853.198 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS