I0128 10:47:27.199325 8 e2e.go:224] Starting e2e run "927b946b-41bb-11ea-a04a-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580208445 - Will randomize all specs
Will run 201 of 2164 specs
Jan 28 10:47:27.435: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 10:47:27.438: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 28 10:47:27.458: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 28 10:47:27.530: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 28 10:47:27.530: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 28 10:47:27.530: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 28 10:47:27.539: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 28 10:47:27.540: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 28 10:47:27.540: INFO: e2e test version: v1.13.12
Jan 28 10:47:27.541: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:47:27.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
Jan 28 10:47:27.667: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 10:47:47.903: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:47.967: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:49.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:50.277: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:51.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:52.043: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:53.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:53.992: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:55.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:55.987: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:57.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:57.982: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:47:59.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:47:59.994: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:48:01.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:48:01.992: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 10:48:03.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 10:48:03.991: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:48:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lht2v" for this suite.
Jan 28 10:48:28.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:48:28.175: INFO: namespace: e2e-tests-container-lifecycle-hook-lht2v, resource: bindings, ignored listing per whitelist
Jan 28 10:48:28.215: INFO: namespace e2e-tests-container-lifecycle-hook-lht2v deletion completed in 24.209078841s
• [SLOW TEST:60.674 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:48:28.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 10:48:28.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 28 10:48:28.556: INFO: stderr: ""
Jan 28 10:48:28.556: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 28 10:48:28.567: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:48:28.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9rh88" for this suite.
Jan 28 10:48:34.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:48:34.698: INFO: namespace: e2e-tests-kubectl-9rh88, resource: bindings, ignored listing per whitelist
Jan 28 10:48:34.849: INFO: namespace e2e-tests-kubectl-9rh88 deletion completed in 6.262343296s
S [SKIPPING] [6.634 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 10:48:28.567: Not supported for server versions before "1.13.12"
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:48:34.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:49:29.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-7wcpz" for this suite.
Jan 28 10:49:36.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:49:36.612: INFO: namespace: e2e-tests-container-runtime-7wcpz, resource: bindings, ignored listing per whitelist
Jan 28 10:49:36.869: INFO: namespace e2e-tests-container-runtime-7wcpz deletion completed in 6.682100012s
• [SLOW TEST:62.020 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:49:36.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 28 10:49:47.756: INFO: Successfully updated pod "annotationupdatee0912725-41bb-11ea-a04a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:49:49.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-znbzq" for this suite.
Jan 28 10:50:13.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:50:14.059: INFO: namespace: e2e-tests-downward-api-znbzq, resource: bindings, ignored listing per whitelist
Jan 28 10:50:14.187: INFO: namespace e2e-tests-downward-api-znbzq deletion completed in 24.298027917s
• [SLOW TEST:37.318 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:50:14.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f6d753f1-41bb-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 10:50:14.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-5dkrn" to be "success or failure"
Jan 28 10:50:14.497: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.847325ms
Jan 28 10:50:16.544: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077356437s
Jan 28 10:50:18.571: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104644119s
Jan 28 10:50:20.746: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279379249s
Jan 28 10:50:22.788: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322172983s
Jan 28 10:50:24.807: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.340791224s
STEP: Saw pod success
Jan 28 10:50:24.807: INFO: Pod "pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 10:50:24.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 10:50:25.042: INFO: Waiting for pod pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005 to disappear
Jan 28 10:50:25.879: INFO: Pod pod-projected-configmaps-f6d873a0-41bb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:50:25.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5dkrn" for this suite.
Jan 28 10:50:32.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:50:32.511: INFO: namespace: e2e-tests-projected-5dkrn, resource: bindings, ignored listing per whitelist
Jan 28 10:50:32.566: INFO: namespace e2e-tests-projected-5dkrn deletion completed in 6.666842346s
• [SLOW TEST:18.379 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:50:32.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 28 10:50:33.004: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733677,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 10:50:33.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733679,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 28 10:50:33.004: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733680,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 28 10:50:43.114: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733693,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 10:50:43.114: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733694,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 28 10:50:43.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h444t,SelfLink:/api/v1/namespaces/e2e-tests-watch-h444t/configmaps/e2e-watch-test-label-changed,UID:01e447b9-41bc-11ea-a994-fa163e34d433,ResourceVersion:19733695,Generation:0,CreationTimestamp:2020-01-28 10:50:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:50:43.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-h444t" for this suite.
Jan 28 10:50:49.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:50:49.419: INFO: namespace: e2e-tests-watch-h444t, resource: bindings, ignored listing per whitelist
Jan 28 10:50:49.440: INFO: namespace e2e-tests-watch-h444t deletion completed in 6.307247862s
• [SLOW TEST:16.872 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:50:49.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-wsxm
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 10:50:49.749: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wsxm" in namespace "e2e-tests-subpath-gtwh5" to be "success or failure"
Jan 28 10:50:49.785: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 35.807693ms
Jan 28 10:50:52.090: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34054614s
Jan 28 10:50:54.105: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355949611s
Jan 28 10:50:56.237: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.487393124s
Jan 28 10:50:58.248: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499121004s
Jan 28 10:51:00.260: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.511045464s
Jan 28 10:51:02.284: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.534755617s
Jan 28 10:51:04.299: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.550002128s
Jan 28 10:51:06.317: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.568106852s
Jan 28 10:51:08.335: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 18.586072441s
Jan 28 10:51:10.352: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 20.602946642s
Jan 28 10:51:12.371: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 22.621475303s
Jan 28 10:51:14.382: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 24.633147713s
Jan 28 10:51:16.398: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 26.648531443s
Jan 28 10:51:18.416: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 28.666772116s
Jan 28 10:51:20.442: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 30.692925117s
Jan 28 10:51:22.484: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Running", Reason="", readiness=false. Elapsed: 32.73512774s
Jan 28 10:51:24.513: INFO: Pod "pod-subpath-test-configmap-wsxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.764222634s
STEP: Saw pod success
Jan 28 10:51:24.514: INFO: Pod "pod-subpath-test-configmap-wsxm" satisfied condition "success or failure"
Jan 28 10:51:24.519: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-wsxm container test-container-subpath-configmap-wsxm:
STEP: delete the pod
Jan 28 10:51:24.819: INFO: Waiting for pod pod-subpath-test-configmap-wsxm to disappear
Jan 28 10:51:24.836: INFO: Pod pod-subpath-test-configmap-wsxm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wsxm
Jan 28 10:51:24.837: INFO: Deleting pod "pod-subpath-test-configmap-wsxm" in namespace "e2e-tests-subpath-gtwh5"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:51:24.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gtwh5" for this suite.
Jan 28 10:51:30.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:51:31.024: INFO: namespace: e2e-tests-subpath-gtwh5, resource: bindings, ignored listing per whitelist
Jan 28 10:51:31.117: INFO: namespace e2e-tests-subpath-gtwh5 deletion completed in 6.263139138s
• [SLOW TEST:41.677 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:51:31.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-24b0c376-41bc-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 10:51:31.403: INFO: Waiting up to 5m0s for pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-wffcz" to be "success or failure"
Jan 28 10:51:31.432: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.255371ms
Jan 28 10:51:33.452: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049524991s
Jan 28 10:51:35.481: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078145947s
Jan 28 10:51:38.174: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.770853739s
Jan 28 10:51:40.190: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.786730219s
Jan 28 10:51:42.237: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.833746332s
STEP: Saw pod success
Jan 28 10:51:42.238: INFO: Pod "pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 10:51:42.276: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 28 10:51:42.416: INFO: Waiting for pod pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005 to disappear
Jan 28 10:51:42.424: INFO: Pod pod-configmaps-24b1f339-41bc-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:51:42.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wffcz" for this suite.
Jan 28 10:51:48.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:51:48.650: INFO: namespace: e2e-tests-configmap-wffcz, resource: bindings, ignored listing per whitelist
Jan 28 10:51:48.678: INFO: namespace e2e-tests-configmap-wffcz deletion completed in 6.246357732s
• [SLOW TEST:17.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:51:48.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 28 10:51:48.796: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:52:04.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-2vdpp" for this suite.
Jan 28 10:52:11.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:52:11.267: INFO: namespace: e2e-tests-init-container-2vdpp, resource: bindings, ignored listing per whitelist
Jan 28 10:52:11.329: INFO: namespace e2e-tests-init-container-2vdpp deletion completed in 6.551761501s
• [SLOW TEST:22.651 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:52:11.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-n52wr
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-n52wr STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-n52wr Jan 28 10:52:11.674: INFO: Found 0 stateful pods, waiting for 1 Jan 28 10:52:21.686: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 28 10:52:21.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 28 10:52:22.373: INFO: stderr: "I0128 10:52:21.979746 61 log.go:172] (0xc00070a2c0) (0xc000734640) Create stream\nI0128 10:52:21.980018 61 log.go:172] (0xc00070a2c0) (0xc000734640) Stream added, broadcasting: 1\nI0128 10:52:21.985860 61 log.go:172] (0xc00070a2c0) Reply frame received for 1\nI0128 10:52:21.985951 61 log.go:172] (0xc00070a2c0) (0xc0005eed20) Create stream\nI0128 10:52:21.985965 61 log.go:172] (0xc00070a2c0) (0xc0005eed20) Stream added, broadcasting: 3\nI0128 10:52:21.988979 61 log.go:172] (0xc00070a2c0) Reply frame received for 3\nI0128 10:52:21.989027 61 log.go:172] (0xc00070a2c0) (0xc0007346e0) Create stream\nI0128 10:52:21.989046 61 log.go:172] (0xc00070a2c0) (0xc0007346e0) Stream added, broadcasting: 5\nI0128 10:52:21.990251 61 log.go:172] (0xc00070a2c0) Reply frame received for 5\nI0128 10:52:22.209301 61 log.go:172] (0xc00070a2c0) Data frame received for 3\nI0128 10:52:22.209413 61 log.go:172] (0xc0005eed20) (3) Data frame handling\nI0128 10:52:22.209471 61 log.go:172] (0xc0005eed20) (3) Data frame sent\nI0128 10:52:22.355568 61 log.go:172] (0xc00070a2c0) Data frame received for 1\nI0128 10:52:22.355649 61 log.go:172] (0xc00070a2c0) (0xc0005eed20) Stream 
removed, broadcasting: 3\nI0128 10:52:22.355723 61 log.go:172] (0xc000734640) (1) Data frame handling\nI0128 10:52:22.355761 61 log.go:172] (0xc000734640) (1) Data frame sent\nI0128 10:52:22.355840 61 log.go:172] (0xc00070a2c0) (0xc0007346e0) Stream removed, broadcasting: 5\nI0128 10:52:22.355913 61 log.go:172] (0xc00070a2c0) (0xc000734640) Stream removed, broadcasting: 1\nI0128 10:52:22.355956 61 log.go:172] (0xc00070a2c0) Go away received\nI0128 10:52:22.356809 61 log.go:172] (0xc00070a2c0) (0xc000734640) Stream removed, broadcasting: 1\nI0128 10:52:22.356843 61 log.go:172] (0xc00070a2c0) (0xc0005eed20) Stream removed, broadcasting: 3\nI0128 10:52:22.356859 61 log.go:172] (0xc00070a2c0) (0xc0007346e0) Stream removed, broadcasting: 5\n" Jan 28 10:52:22.373: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 28 10:52:22.373: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 28 10:52:22.388: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 28 10:52:32.405: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 28 10:52:32.405: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 10:52:32.622: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999501s Jan 28 10:52:33.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.814889804s Jan 28 10:52:34.665: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.785742715s Jan 28 10:52:35.678: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.772216599s Jan 28 10:52:36.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.75893305s Jan 28 10:52:37.805: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.669526787s Jan 28 10:52:38.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.631228124s Jan 28 
10:52:39.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.617246537s Jan 28 10:52:40.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.556952376s Jan 28 10:52:41.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 539.624561ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-n52wr Jan 28 10:52:42.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 10:52:43.660: INFO: stderr: "I0128 10:52:43.242917 84 log.go:172] (0xc00015a6e0) (0xc0006494a0) Create stream\nI0128 10:52:43.243158 84 log.go:172] (0xc00015a6e0) (0xc0006494a0) Stream added, broadcasting: 1\nI0128 10:52:43.249137 84 log.go:172] (0xc00015a6e0) Reply frame received for 1\nI0128 10:52:43.249192 84 log.go:172] (0xc00015a6e0) (0xc000732000) Create stream\nI0128 10:52:43.249208 84 log.go:172] (0xc00015a6e0) (0xc000732000) Stream added, broadcasting: 3\nI0128 10:52:43.250195 84 log.go:172] (0xc00015a6e0) Reply frame received for 3\nI0128 10:52:43.250222 84 log.go:172] (0xc00015a6e0) (0xc000732140) Create stream\nI0128 10:52:43.250231 84 log.go:172] (0xc00015a6e0) (0xc000732140) Stream added, broadcasting: 5\nI0128 10:52:43.253244 84 log.go:172] (0xc00015a6e0) Reply frame received for 5\nI0128 10:52:43.402846 84 log.go:172] (0xc00015a6e0) Data frame received for 3\nI0128 10:52:43.402920 84 log.go:172] (0xc000732000) (3) Data frame handling\nI0128 10:52:43.402944 84 log.go:172] (0xc000732000) (3) Data frame sent\nI0128 10:52:43.636578 84 log.go:172] (0xc00015a6e0) (0xc000732000) Stream removed, broadcasting: 3\nI0128 10:52:43.637701 84 log.go:172] (0xc00015a6e0) Data frame received for 1\nI0128 10:52:43.638139 84 log.go:172] (0xc00015a6e0) (0xc000732140) Stream removed, broadcasting: 5\nI0128 10:52:43.638360 84 log.go:172] 
(0xc0006494a0) (1) Data frame handling\nI0128 10:52:43.638407 84 log.go:172] (0xc0006494a0) (1) Data frame sent\nI0128 10:52:43.638439 84 log.go:172] (0xc00015a6e0) (0xc0006494a0) Stream removed, broadcasting: 1\nI0128 10:52:43.638479 84 log.go:172] (0xc00015a6e0) Go away received\nI0128 10:52:43.640104 84 log.go:172] (0xc00015a6e0) (0xc0006494a0) Stream removed, broadcasting: 1\nI0128 10:52:43.640152 84 log.go:172] (0xc00015a6e0) (0xc000732000) Stream removed, broadcasting: 3\nI0128 10:52:43.640177 84 log.go:172] (0xc00015a6e0) (0xc000732140) Stream removed, broadcasting: 5\n" Jan 28 10:52:43.661: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 28 10:52:43.661: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 28 10:52:43.715: INFO: Found 1 stateful pods, waiting for 3 Jan 28 10:52:53.758: INFO: Found 2 stateful pods, waiting for 3 Jan 28 10:53:03.739: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 10:53:03.739: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 10:53:03.739: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 28 10:53:13.746: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 10:53:13.747: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 10:53:13.747: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 28 10:53:13.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 28 10:53:14.629: INFO: stderr: "I0128 
10:53:14.197656 106 log.go:172] (0xc00015c6e0) (0xc0007c2640) Create stream\nI0128 10:53:14.198317 106 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream added, broadcasting: 1\nI0128 10:53:14.215453 106 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0128 10:53:14.215748 106 log.go:172] (0xc00015c6e0) (0xc0007c26e0) Create stream\nI0128 10:53:14.215786 106 log.go:172] (0xc00015c6e0) (0xc0007c26e0) Stream added, broadcasting: 3\nI0128 10:53:14.218667 106 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0128 10:53:14.218758 106 log.go:172] (0xc00015c6e0) (0xc0007c2780) Create stream\nI0128 10:53:14.218771 106 log.go:172] (0xc00015c6e0) (0xc0007c2780) Stream added, broadcasting: 5\nI0128 10:53:14.219709 106 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0128 10:53:14.360579 106 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0128 10:53:14.360720 106 log.go:172] (0xc0007c26e0) (3) Data frame handling\nI0128 10:53:14.360765 106 log.go:172] (0xc0007c26e0) (3) Data frame sent\nI0128 10:53:14.610426 106 log.go:172] (0xc00015c6e0) (0xc0007c26e0) Stream removed, broadcasting: 3\nI0128 10:53:14.610953 106 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0128 10:53:14.611115 106 log.go:172] (0xc00015c6e0) (0xc0007c2780) Stream removed, broadcasting: 5\nI0128 10:53:14.611168 106 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0128 10:53:14.611203 106 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0128 10:53:14.611217 106 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0128 10:53:14.611232 106 log.go:172] (0xc00015c6e0) Go away received\nI0128 10:53:14.612258 106 log.go:172] (0xc00015c6e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0128 10:53:14.612425 106 log.go:172] (0xc00015c6e0) (0xc0007c26e0) Stream removed, broadcasting: 3\nI0128 10:53:14.612437 106 log.go:172] (0xc00015c6e0) (0xc0007c2780) Stream removed, broadcasting: 5\n" Jan 28 10:53:14.629: INFO: stdout: "'/usr/share/nginx/html/index.html' -> 
'/tmp/index.html'\n" Jan 28 10:53:14.629: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 28 10:53:14.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 28 10:53:15.265: INFO: stderr: "I0128 10:53:15.027132 128 log.go:172] (0xc00071a370) (0xc0005c5540) Create stream\nI0128 10:53:15.027292 128 log.go:172] (0xc00071a370) (0xc0005c5540) Stream added, broadcasting: 1\nI0128 10:53:15.031031 128 log.go:172] (0xc00071a370) Reply frame received for 1\nI0128 10:53:15.031086 128 log.go:172] (0xc00071a370) (0xc00066a000) Create stream\nI0128 10:53:15.031094 128 log.go:172] (0xc00071a370) (0xc00066a000) Stream added, broadcasting: 3\nI0128 10:53:15.031856 128 log.go:172] (0xc00071a370) Reply frame received for 3\nI0128 10:53:15.031877 128 log.go:172] (0xc00071a370) (0xc00069e000) Create stream\nI0128 10:53:15.031887 128 log.go:172] (0xc00071a370) (0xc00069e000) Stream added, broadcasting: 5\nI0128 10:53:15.032578 128 log.go:172] (0xc00071a370) Reply frame received for 5\nI0128 10:53:15.149886 128 log.go:172] (0xc00071a370) Data frame received for 3\nI0128 10:53:15.149953 128 log.go:172] (0xc00066a000) (3) Data frame handling\nI0128 10:53:15.149982 128 log.go:172] (0xc00066a000) (3) Data frame sent\nI0128 10:53:15.256494 128 log.go:172] (0xc00071a370) Data frame received for 1\nI0128 10:53:15.257138 128 log.go:172] (0xc00071a370) (0xc00066a000) Stream removed, broadcasting: 3\nI0128 10:53:15.257268 128 log.go:172] (0xc0005c5540) (1) Data frame handling\nI0128 10:53:15.257331 128 log.go:172] (0xc0005c5540) (1) Data frame sent\nI0128 10:53:15.257535 128 log.go:172] (0xc00071a370) (0xc00069e000) Stream removed, broadcasting: 5\nI0128 10:53:15.257754 128 log.go:172] (0xc00071a370) (0xc0005c5540) Stream removed, broadcasting: 1\nI0128 10:53:15.257790 
128 log.go:172] (0xc00071a370) Go away received\nI0128 10:53:15.258575 128 log.go:172] (0xc00071a370) (0xc0005c5540) Stream removed, broadcasting: 1\nI0128 10:53:15.258600 128 log.go:172] (0xc00071a370) (0xc00066a000) Stream removed, broadcasting: 3\nI0128 10:53:15.258617 128 log.go:172] (0xc00071a370) (0xc00069e000) Stream removed, broadcasting: 5\n" Jan 28 10:53:15.266: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 28 10:53:15.266: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 28 10:53:15.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 28 10:53:15.739: INFO: stderr: "I0128 10:53:15.425259 150 log.go:172] (0xc0008682c0) (0xc0005b12c0) Create stream\nI0128 10:53:15.425468 150 log.go:172] (0xc0008682c0) (0xc0005b12c0) Stream added, broadcasting: 1\nI0128 10:53:15.429277 150 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0128 10:53:15.429311 150 log.go:172] (0xc0008682c0) (0xc000448000) Create stream\nI0128 10:53:15.429321 150 log.go:172] (0xc0008682c0) (0xc000448000) Stream added, broadcasting: 3\nI0128 10:53:15.429990 150 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0128 10:53:15.430002 150 log.go:172] (0xc0008682c0) (0xc0005b1360) Create stream\nI0128 10:53:15.430006 150 log.go:172] (0xc0008682c0) (0xc0005b1360) Stream added, broadcasting: 5\nI0128 10:53:15.430704 150 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0128 10:53:15.620415 150 log.go:172] (0xc0008682c0) Data frame received for 3\nI0128 10:53:15.620606 150 log.go:172] (0xc000448000) (3) Data frame handling\nI0128 10:53:15.620672 150 log.go:172] (0xc000448000) (3) Data frame sent\nI0128 10:53:15.727945 150 log.go:172] (0xc0008682c0) (0xc000448000) Stream removed, broadcasting: 3\nI0128 10:53:15.728062 150 
log.go:172] (0xc0008682c0) Data frame received for 1\nI0128 10:53:15.728083 150 log.go:172] (0xc0005b12c0) (1) Data frame handling\nI0128 10:53:15.728112 150 log.go:172] (0xc0005b12c0) (1) Data frame sent\nI0128 10:53:15.728133 150 log.go:172] (0xc0008682c0) (0xc0005b1360) Stream removed, broadcasting: 5\nI0128 10:53:15.728188 150 log.go:172] (0xc0008682c0) (0xc0005b12c0) Stream removed, broadcasting: 1\nI0128 10:53:15.728216 150 log.go:172] (0xc0008682c0) Go away received\nI0128 10:53:15.728957 150 log.go:172] (0xc0008682c0) (0xc0005b12c0) Stream removed, broadcasting: 1\nI0128 10:53:15.728983 150 log.go:172] (0xc0008682c0) (0xc000448000) Stream removed, broadcasting: 3\nI0128 10:53:15.728990 150 log.go:172] (0xc0008682c0) (0xc0005b1360) Stream removed, broadcasting: 5\n" Jan 28 10:53:15.739: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 28 10:53:15.740: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 28 10:53:15.740: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 10:53:15.786: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 28 10:53:25.826: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 28 10:53:25.826: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 28 10:53:25.826: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 28 10:53:25.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999476s Jan 28 10:53:26.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975702344s Jan 28 10:53:27.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.960478653s Jan 28 10:53:28.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932147526s Jan 28 10:53:29.952: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 5.917013536s Jan 28 10:53:30.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.890709933s Jan 28 10:53:31.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.8733053s Jan 28 10:53:33.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.851649291s Jan 28 10:53:34.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.506773696s Jan 28 10:53:35.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 468.221994ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-n52wr Jan 28 10:53:36.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 10:53:37.179: INFO: stderr: "I0128 10:53:36.782283 172 log.go:172] (0xc000732160) (0xc00067c6e0) Create stream\nI0128 10:53:36.782718 172 log.go:172] (0xc000732160) (0xc00067c6e0) Stream added, broadcasting: 1\nI0128 10:53:36.841767 172 log.go:172] (0xc000732160) Reply frame received for 1\nI0128 10:53:36.842118 172 log.go:172] (0xc000732160) (0xc0003a2780) Create stream\nI0128 10:53:36.842140 172 log.go:172] (0xc000732160) (0xc0003a2780) Stream added, broadcasting: 3\nI0128 10:53:36.847112 172 log.go:172] (0xc000732160) Reply frame received for 3\nI0128 10:53:36.847163 172 log.go:172] (0xc000732160) (0xc0003a28c0) Create stream\nI0128 10:53:36.847173 172 log.go:172] (0xc000732160) (0xc0003a28c0) Stream added, broadcasting: 5\nI0128 10:53:36.849190 172 log.go:172] (0xc000732160) Reply frame received for 5\nI0128 10:53:37.004875 172 log.go:172] (0xc000732160) Data frame received for 3\nI0128 10:53:37.004980 172 log.go:172] (0xc0003a2780) (3) Data frame handling\nI0128 10:53:37.005010 172 log.go:172] (0xc0003a2780) (3) Data frame sent\nI0128 10:53:37.165869 172 log.go:172] (0xc000732160) Data frame received for 
1\nI0128 10:53:37.165985 172 log.go:172] (0xc000732160) (0xc0003a2780) Stream removed, broadcasting: 3\nI0128 10:53:37.166122 172 log.go:172] (0xc00067c6e0) (1) Data frame handling\nI0128 10:53:37.166142 172 log.go:172] (0xc00067c6e0) (1) Data frame sent\nI0128 10:53:37.166155 172 log.go:172] (0xc000732160) (0xc00067c6e0) Stream removed, broadcasting: 1\nI0128 10:53:37.166852 172 log.go:172] (0xc000732160) (0xc0003a28c0) Stream removed, broadcasting: 5\nI0128 10:53:37.166890 172 log.go:172] (0xc000732160) Go away received\nI0128 10:53:37.167223 172 log.go:172] (0xc000732160) (0xc00067c6e0) Stream removed, broadcasting: 1\nI0128 10:53:37.167368 172 log.go:172] (0xc000732160) (0xc0003a2780) Stream removed, broadcasting: 3\nI0128 10:53:37.167381 172 log.go:172] (0xc000732160) (0xc0003a28c0) Stream removed, broadcasting: 5\n" Jan 28 10:53:37.179: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 28 10:53:37.179: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 28 10:53:37.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 10:53:37.945: INFO: stderr: "I0128 10:53:37.379025 194 log.go:172] (0xc0006e22c0) (0xc00065f360) Create stream\nI0128 10:53:37.379133 194 log.go:172] (0xc0006e22c0) (0xc00065f360) Stream added, broadcasting: 1\nI0128 10:53:37.383953 194 log.go:172] (0xc0006e22c0) Reply frame received for 1\nI0128 10:53:37.383989 194 log.go:172] (0xc0006e22c0) (0xc00065f400) Create stream\nI0128 10:53:37.383997 194 log.go:172] (0xc0006e22c0) (0xc00065f400) Stream added, broadcasting: 3\nI0128 10:53:37.385925 194 log.go:172] (0xc0006e22c0) Reply frame received for 3\nI0128 10:53:37.386001 194 log.go:172] (0xc0006e22c0) (0xc00065f4a0) Create stream\nI0128 10:53:37.386011 194 log.go:172] (0xc0006e22c0) 
(0xc00065f4a0) Stream added, broadcasting: 5\nI0128 10:53:37.387174 194 log.go:172] (0xc0006e22c0) Reply frame received for 5\nI0128 10:53:37.575372 194 log.go:172] (0xc0006e22c0) Data frame received for 3\nI0128 10:53:37.575520 194 log.go:172] (0xc00065f400) (3) Data frame handling\nI0128 10:53:37.575556 194 log.go:172] (0xc00065f400) (3) Data frame sent\nI0128 10:53:37.923284 194 log.go:172] (0xc0006e22c0) Data frame received for 1\nI0128 10:53:37.923456 194 log.go:172] (0xc00065f360) (1) Data frame handling\nI0128 10:53:37.923529 194 log.go:172] (0xc00065f360) (1) Data frame sent\nI0128 10:53:37.924247 194 log.go:172] (0xc0006e22c0) (0xc00065f360) Stream removed, broadcasting: 1\nI0128 10:53:37.925419 194 log.go:172] (0xc0006e22c0) (0xc00065f400) Stream removed, broadcasting: 3\nI0128 10:53:37.926009 194 log.go:172] (0xc0006e22c0) (0xc00065f4a0) Stream removed, broadcasting: 5\nI0128 10:53:37.926097 194 log.go:172] (0xc0006e22c0) (0xc00065f360) Stream removed, broadcasting: 1\nI0128 10:53:37.926134 194 log.go:172] (0xc0006e22c0) (0xc00065f400) Stream removed, broadcasting: 3\nI0128 10:53:37.926146 194 log.go:172] (0xc0006e22c0) (0xc00065f4a0) Stream removed, broadcasting: 5\n" Jan 28 10:53:37.946: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 28 10:53:37.946: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 28 10:53:37.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n52wr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 10:53:38.367: INFO: stderr: "I0128 10:53:38.147199 215 log.go:172] (0xc00015c630) (0xc0006ec640) Create stream\nI0128 10:53:38.147417 215 log.go:172] (0xc00015c630) (0xc0006ec640) Stream added, broadcasting: 1\nI0128 10:53:38.152307 215 log.go:172] (0xc00015c630) Reply frame received for 1\nI0128 10:53:38.152378 215 log.go:172] 
(0xc00015c630) (0xc0005f6d20) Create stream\nI0128 10:53:38.152393 215 log.go:172] (0xc00015c630) (0xc0005f6d20) Stream added, broadcasting: 3\nI0128 10:53:38.153943 215 log.go:172] (0xc00015c630) Reply frame received for 3\nI0128 10:53:38.153980 215 log.go:172] (0xc00015c630) (0xc00067a000) Create stream\nI0128 10:53:38.153988 215 log.go:172] (0xc00015c630) (0xc00067a000) Stream added, broadcasting: 5\nI0128 10:53:38.155847 215 log.go:172] (0xc00015c630) Reply frame received for 5\nI0128 10:53:38.257039 215 log.go:172] (0xc00015c630) Data frame received for 3\nI0128 10:53:38.257073 215 log.go:172] (0xc0005f6d20) (3) Data frame handling\nI0128 10:53:38.257094 215 log.go:172] (0xc0005f6d20) (3) Data frame sent\nI0128 10:53:38.356554 215 log.go:172] (0xc00015c630) Data frame received for 1\nI0128 10:53:38.356598 215 log.go:172] (0xc0006ec640) (1) Data frame handling\nI0128 10:53:38.356621 215 log.go:172] (0xc0006ec640) (1) Data frame sent\nI0128 10:53:38.356646 215 log.go:172] (0xc00015c630) (0xc0006ec640) Stream removed, broadcasting: 1\nI0128 10:53:38.356777 215 log.go:172] (0xc00015c630) (0xc0005f6d20) Stream removed, broadcasting: 3\nI0128 10:53:38.356873 215 log.go:172] (0xc00015c630) (0xc00067a000) Stream removed, broadcasting: 5\nI0128 10:53:38.356963 215 log.go:172] (0xc00015c630) Go away received\nI0128 10:53:38.357295 215 log.go:172] (0xc00015c630) (0xc0006ec640) Stream removed, broadcasting: 1\nI0128 10:53:38.357309 215 log.go:172] (0xc00015c630) (0xc0005f6d20) Stream removed, broadcasting: 3\nI0128 10:53:38.357319 215 log.go:172] (0xc00015c630) (0xc00067a000) Stream removed, broadcasting: 5\n" Jan 28 10:53:38.367: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 28 10:53:38.367: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 28 10:53:38.367: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse 
order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 28 10:54:18.417: INFO: Deleting all statefulset in ns e2e-tests-statefulset-n52wr Jan 28 10:54:18.427: INFO: Scaling statefulset ss to 0 Jan 28 10:54:18.453: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 10:54:18.456: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 10:54:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-n52wr" for this suite. Jan 28 10:54:24.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 10:54:24.861: INFO: namespace: e2e-tests-statefulset-n52wr, resource: bindings, ignored listing per whitelist Jan 28 10:54:24.877: INFO: namespace e2e-tests-statefulset-n52wr deletion completed in 6.235267018s • [SLOW TEST:133.547 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 28 10:54:24.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jan 28 10:54:25.102: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jan 28 10:54:25.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5' Jan 28 10:54:27.410: INFO: stderr: "" Jan 28 10:54:27.410: INFO: stdout: "service/redis-slave created\n" Jan 28 10:54:27.412: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jan 28 10:54:27.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5' Jan 28 10:54:27.925: INFO: stderr: "" Jan 28 10:54:27.925: INFO: stdout: "service/redis-master created\n" Jan 28 10:54:27.926: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 28 10:54:27.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:54:28.365: INFO: stderr: ""
Jan 28 10:54:28.365: INFO: stdout: "service/frontend created\n"
Jan 28 10:54:28.366: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 28 10:54:28.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:54:28.887: INFO: stderr: ""
Jan 28 10:54:28.887: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 28 10:54:28.889: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 28 10:54:28.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:54:29.448: INFO: stderr: ""
Jan 28 10:54:29.448: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 28 10:54:29.449: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 28 10:54:29.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:54:30.002: INFO: stderr: ""
Jan 28 10:54:30.003: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 28 10:54:30.003: INFO: Waiting for all frontend pods to be Running.
Jan 28 10:55:00.057: INFO: Waiting for frontend to serve content.
Jan 28 10:55:00.169: INFO: Trying to add a new entry to the guestbook.
Jan 28 10:55:00.204: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 28 10:55:00.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:00.700: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:00.700: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 10:55:00.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:01.068: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:01.068: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 10:55:01.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:01.428: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:01.429: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 10:55:01.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:01.583: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:01.583: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 10:55:01.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:01.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:01.819: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 10:55:01.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mv7t5'
Jan 28 10:55:02.376: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 10:55:02.377: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:55:02.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mv7t5" for this suite.
Jan 28 10:55:54.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:55:54.791: INFO: namespace: e2e-tests-kubectl-mv7t5, resource: bindings, ignored listing per whitelist
Jan 28 10:55:54.957: INFO: namespace e2e-tests-kubectl-mv7t5 deletion completed in 52.51089598s
• [SLOW TEST:90.081 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:55:54.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 28 10:55:55.093: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:56:19.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7td7k" for this suite.
Jan 28 10:56:45.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:56:46.048: INFO: namespace: e2e-tests-init-container-7td7k, resource: bindings, ignored listing per whitelist
Jan 28 10:56:46.077: INFO: namespace e2e-tests-init-container-7td7k deletion completed in 26.379604698s
• [SLOW TEST:51.119 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:56:46.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 10:56:46.679: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e072f52a-41bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000c37fd2), BlockOwnerDeletion:(*bool)(0xc000c37fd3)}}
Jan 28 10:56:46.745: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e06aa76c-41bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00117b69a), BlockOwnerDeletion:(*bool)(0xc00117b69b)}}
Jan 28 10:56:46.806: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e06f0d6a-41bc-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000d15a42), BlockOwnerDeletion:(*bool)(0xc000d15a43)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:56:57.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ttrnp" for this suite.
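The three OwnerReferences dumps above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the test asserts the garbage collector still makes progress on deletion. As a readable sketch, the pod names and UIDs are taken from the log; the controller and blockOwnerDeletion values are assumptions, since the dump only shows pointer addresses:

```yaml
# Sketch of pod1's metadata only; pod2 and pod3 are analogous, each owning
# the next pod in the circle (pod1 -> pod3, pod2 -> pod1, pod3 -> pod2).
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: e072f52a-41bc-11ea-a994-fa163e34d433  # UID from the log
    controller: true            # assumed; dump shows only a *bool address
    blockOwnerDeletion: true    # assumed; dump shows only a *bool address
```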
Jan 28 10:57:03.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:57:03.667: INFO: namespace: e2e-tests-gc-ttrnp, resource: bindings, ignored listing per whitelist
Jan 28 10:57:03.893: INFO: namespace e2e-tests-gc-ttrnp deletion completed in 6.80224675s
• [SLOW TEST:17.816 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:57:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-eb221911-41bc-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 10:57:04.306: INFO: Waiting up to 5m0s for pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-c8fw5" to be "success or failure"
Jan 28 10:57:04.326: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.594913ms
Jan 28 10:57:06.386: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080073018s
Jan 28 10:57:08.398: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092604766s
Jan 28 10:57:10.811: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504949557s
Jan 28 10:57:12.835: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528802833s
Jan 28 10:57:14.945: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.638992117s
STEP: Saw pod success
Jan 28 10:57:14.945: INFO: Pod "pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 10:57:14.953: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 28 10:57:15.192: INFO: Waiting for pod pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005 to disappear
Jan 28 10:57:15.203: INFO: Pod pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:57:15.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-c8fw5" for this suite.
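The pod exercised by this Secrets test can be sketched as follows. Only the pod name, secret name, namespace, and container name come from the log; the image, args, mount path, and restart policy are assumptions consistent with a pod that is polled until "success or failure":

```yaml
# Minimal sketch, not the exact spec the suite generated.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-eb23f717-41bc-11ea-a04a-0242ac110005  # from the log
spec:
  restartPolicy: Never          # assumed: the test waits for terminal success/failure
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-eb221911-41bc-11ea-a04a-0242ac110005  # from the log
  containers:
  - name: secret-volume-test    # container name from the log
    image: docker.io/library/busybox:1.29   # assumed; any image that can cat a file works
    command: ["sh", "-c", "cat /etc/secret-volume/* && exit 0"]  # assumed
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume         # assumed mount path
      readOnly: true
```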
Jan 28 10:57:21.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:57:21.315: INFO: namespace: e2e-tests-secrets-c8fw5, resource: bindings, ignored listing per whitelist
Jan 28 10:57:21.424: INFO: namespace e2e-tests-secrets-c8fw5 deletion completed in 6.212919043s
• [SLOW TEST:17.530 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:57:21.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 28 10:57:21.591: INFO: PodSpec: initContainers in spec.initContainers
Jan 28 10:58:35.867: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f5743127-41bc-11ea-a04a-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-zbw2n",
SelfLink:"/api/v1/namespaces/e2e-tests-init-container-zbw2n/pods/pod-init-f5743127-41bc-11ea-a04a-0242ac110005", UID:"f5761408-41bc-11ea-a994-fa163e34d433", ResourceVersion:"19734865", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715805841, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"591186643"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fts8s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d7e100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fts8s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fts8s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fts8s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d4c088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d610e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001d4c100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d4c120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d4c128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d4c12c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715805841, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715805841, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715805841, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715805841, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001d52040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c50770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c507e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://b286d0272de48cbfe0ac3b4fce6fcf354afdb13c1747ac1dc01734af0104a6b2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d52080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001d52060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:58:35.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zbw2n" for this suite.
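The single-line Pod dump above is hard to scan. Reconstructed as a minimal manifest sketch, every field below is read directly from the dump and everything else is omitted; the point of the test is that init1 fails permanently, so init2 and run1 must never start:

```yaml
# Sketch reconstructed from the v1.Pod dump above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-f5743127-41bc-11ea-a04a-0242ac110005
  labels:
    name: foo
    time: "591186643"
spec:
  restartPolicy: Always         # RestartPolicy:"Always" in the dump
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always exits non-zero; blocks the rest of the pod
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]      # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                  # limits == requests, hence QOSClass "Guaranteed"
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```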
Jan 28 10:59:00.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:59:00.098: INFO: namespace: e2e-tests-init-container-zbw2n, resource: bindings, ignored listing per whitelist
Jan 28 10:59:00.184: INFO: namespace e2e-tests-init-container-zbw2n deletion completed in 24.167433081s
• [SLOW TEST:98.759 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:59:00.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-305b8a07-41bd-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 10:59:00.503: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-zkkx4" to be "success or failure"
Jan 28 10:59:00.576: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.793647ms
Jan 28 10:59:02.639: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135503877s
Jan 28 10:59:04.670: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166172174s
Jan 28 10:59:07.076: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572132635s
Jan 28 10:59:09.123: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619743765s
Jan 28 10:59:11.139: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635048952s
Jan 28 10:59:13.298: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.794905732s
STEP: Saw pod success
Jan 28 10:59:13.299: INFO: Pod "pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 10:59:13.311: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 10:59:14.895: INFO: Waiting for pod pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005 to disappear
Jan 28 10:59:14.921: INFO: Pod pod-projected-configmaps-305d1279-41bd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 10:59:14.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zkkx4" for this suite.
Jan 28 10:59:21.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 10:59:21.333: INFO: namespace: e2e-tests-projected-zkkx4, resource: bindings, ignored listing per whitelist
Jan 28 10:59:21.361: INFO: namespace e2e-tests-projected-zkkx4 deletion completed in 6.391381672s
• [SLOW TEST:21.177 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 10:59:21.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 28 10:59:21.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:22.116: INFO: stderr: ""
Jan 28 10:59:22.116: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 10:59:22.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:22.430: INFO: stderr: ""
Jan 28 10:59:22.430: INFO: stdout: "update-demo-nautilus-7wqtb update-demo-nautilus-8r4xw "
Jan 28 10:59:22.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wqtb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:22.689: INFO: stderr: ""
Jan 28 10:59:22.690: INFO: stdout: ""
Jan 28 10:59:22.690: INFO: update-demo-nautilus-7wqtb is created but not running
Jan 28 10:59:27.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:27.919: INFO: stderr: ""
Jan 28 10:59:27.920: INFO: stdout: "update-demo-nautilus-7wqtb update-demo-nautilus-8r4xw "
Jan 28 10:59:27.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wqtb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:28.121: INFO: stderr: ""
Jan 28 10:59:28.121: INFO: stdout: ""
Jan 28 10:59:28.121: INFO: update-demo-nautilus-7wqtb is created but not running
Jan 28 10:59:33.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:33.259: INFO: stderr: ""
Jan 28 10:59:33.260: INFO: stdout: "update-demo-nautilus-7wqtb update-demo-nautilus-8r4xw "
Jan 28 10:59:33.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wqtb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:33.432: INFO: stderr: ""
Jan 28 10:59:33.433: INFO: stdout: ""
Jan 28 10:59:33.433: INFO: update-demo-nautilus-7wqtb is created but not running
Jan 28 10:59:38.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:38.673: INFO: stderr: ""
Jan 28 10:59:38.674: INFO: stdout: "update-demo-nautilus-7wqtb update-demo-nautilus-8r4xw "
Jan 28 10:59:38.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wqtb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:38.790: INFO: stderr: ""
Jan 28 10:59:38.790: INFO: stdout: "true"
Jan 28 10:59:38.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wqtb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:38.948: INFO: stderr: ""
Jan 28 10:59:38.948: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 10:59:38.948: INFO: validating pod update-demo-nautilus-7wqtb
Jan 28 10:59:38.990: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 28 10:59:38.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 10:59:38.991: INFO: update-demo-nautilus-7wqtb is verified up and running
Jan 28 10:59:38.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8r4xw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:39.131: INFO: stderr: ""
Jan 28 10:59:39.131: INFO: stdout: "true"
Jan 28 10:59:39.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8r4xw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 10:59:39.284: INFO: stderr: ""
Jan 28 10:59:39.284: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 10:59:39.284: INFO: validating pod update-demo-nautilus-8r4xw
Jan 28 10:59:39.359: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 28 10:59:39.359: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 10:59:39.359: INFO: update-demo-nautilus-8r4xw is verified up and running
STEP: rolling-update to new replication controller
Jan 28 10:59:39.364: INFO: scanned /root for discovery docs:
Jan 28 10:59:39.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-hphfs'
Jan 28 11:00:14.085: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 11:00:14.086: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 11:00:14.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hphfs'
Jan 28 11:00:14.274: INFO: stderr: ""
Jan 28 11:00:14.275: INFO: stdout: "update-demo-kitten-dd969 update-demo-kitten-phc5t "
Jan 28 11:00:14.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dd969 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 11:00:14.397: INFO: stderr: ""
Jan 28 11:00:14.398: INFO: stdout: "true"
Jan 28 11:00:14.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dd969 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs'
Jan 28 11:00:14.561: INFO: stderr: ""
Jan 28 11:00:14.562: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 11:00:14.562: INFO: validating pod update-demo-kitten-dd969
Jan 28 11:00:14.610: INFO: got data: {
  "image": "kitten.jpg"
}
Jan 28 11:00:14.611: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 28 11:00:14.611: INFO: update-demo-kitten-dd969 is verified up and running
Jan 28 11:00:14.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-phc5t -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs' Jan 28 11:00:14.742: INFO: stderr: "" Jan 28 11:00:14.742: INFO: stdout: "true" Jan 28 11:00:14.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-phc5t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hphfs' Jan 28 11:00:14.896: INFO: stderr: "" Jan 28 11:00:14.896: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 28 11:00:14.896: INFO: validating pod update-demo-kitten-phc5t Jan 28 11:00:14.907: INFO: got data: { "image": "kitten.jpg" } Jan 28 11:00:14.907: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 28 11:00:14.907: INFO: update-demo-kitten-phc5t is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:00:14.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hphfs" for this suite. 
Jan 28 11:00:44.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:00:45.087: INFO: namespace: e2e-tests-kubectl-hphfs, resource: bindings, ignored listing per whitelist
Jan 28 11:00:45.117: INFO: namespace e2e-tests-kubectl-hphfs deletion completed in 30.201593254s
• [SLOW TEST:83.755 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:00:45.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 11:00:45.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8xbsk'
Jan 28 11:00:45.573: INFO: stderr: ""
Jan 28 11:00:45.573: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 28 11:00:45.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-8xbsk'
Jan 28 11:00:46.058: INFO: stderr: ""
Jan 28 11:00:46.059: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:00:46.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8xbsk" for this suite.
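The "Kubectl run pod" case above creates a bare, non-restarting pod via `kubectl run --restart=Never --generator=run-pod/v1`. Generators were removed from `kubectl run` around 1.18, after which `run` always creates a single pod, so a present-day equivalent (pod name and cleanup shown only for illustration) looks like:

```shell
# Modern equivalent of the test's "run a bare pod" step; assumes a live cluster.
kubectl run e2e-test-nginx-pod \
  --restart=Never \
  --image=docker.io/library/nginx:1.14-alpine

# The test then verifies the pod object exists before deleting it in AfterEach:
kubectl get pod e2e-test-nginx-pod
kubectl delete pod e2e-test-nginx-pod
```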
Jan 28 11:00:52.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:00:52.210: INFO: namespace: e2e-tests-kubectl-8xbsk, resource: bindings, ignored listing per whitelist
Jan 28 11:00:52.253: INFO: namespace e2e-tests-kubectl-8xbsk deletion completed in 6.186020364s
• [SLOW TEST:7.135 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:00:52.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 11:00:52.630: INFO: Waiting up to 5m0s for pod "pod-733a1d76-41bb-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-b87j5" to be "success or failure"
Jan 28 11:00:52.766: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 135.355452ms
Jan 28 11:00:54.782: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151011296s
Jan 28 11:00:56.796: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165511906s
Jan 28 11:00:58.812: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181135878s
Jan 28 11:01:00.831: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200206029s
Jan 28 11:01:02.848: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217610322s
STEP: Saw pod success
Jan 28 11:01:02.848: INFO: Pod "pod-733a1d76-41bd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:01:02.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-733a1d76-41bd-11ea-a04a-0242ac110005 container test-container:
STEP: delete the pod
Jan 28 11:01:03.134: INFO: Waiting for pod pod-733a1d76-41bd-11ea-a04a-0242ac110005 to disappear
Jan 28 11:01:03.151: INFO: Pod pod-733a1d76-41bd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:01:03.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b87j5" for this suite.
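The emptydir "(non-root,0644,tmpfs)" case above runs a short-lived pod and waits for the "success or failure" terminal phase. The test's actual pod spec is not in the log; the following is a hypothetical reconstruction from the test title only, where "tmpfs" maps to `emptyDir.medium: Memory` and "non-root" to a `runAsUser` security context (pod name, image, and command are made up):

```shell
# Illustrative sketch, not the test's real manifest; assumes a live cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # "non-root" part of the test title
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt
  volumes:
  - name: vol
    emptyDir:
      medium: Memory               # "tmpfs" part of the test title
EOF
```

Once `kubectl get pod emptydir-0644-tmpfs-demo` shows phase `Succeeded`, the container logs carry the `ls -l` output the framework would have checked.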
Jan 28 11:01:09.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:01:09.454: INFO: namespace: e2e-tests-emptydir-b87j5, resource: bindings, ignored listing per whitelist
Jan 28 11:01:09.497: INFO: namespace e2e-tests-emptydir-b87j5 deletion completed in 6.330771043s
• [SLOW TEST:17.244 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:01:09.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wv4x5
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 28 11:01:09.719: INFO: Found 0 stateful pods, waiting for 3
Jan 28 11:01:19.775: INFO: Found 1 stateful pods, waiting for 3
Jan 28 11:01:29.738: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:01:29.738: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:01:29.738: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 11:01:39.738: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:01:39.738: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:01:39.738: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 28 11:01:39.809: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 28 11:01:49.922: INFO: Updating stateful set ss2
Jan 28 11:01:49.946: INFO: Waiting for Pod e2e-tests-statefulset-wv4x5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 11:01:59.977: INFO: Waiting for Pod e2e-tests-statefulset-wv4x5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 28 11:02:10.470: INFO: Found 2 stateful pods, waiting for 3
Jan 28 11:02:20.917: INFO: Found 2 stateful pods, waiting for 3
Jan 28 11:02:31.057: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:02:31.058: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:02:31.058: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 11:02:40.517: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:02:40.517: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:02:40.517: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 28 11:02:40.624: INFO: Updating stateful set ss2
Jan 28 11:02:40.727: INFO: Waiting for Pod e2e-tests-statefulset-wv4x5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 11:02:50.923: INFO: Updating stateful set ss2
Jan 28 11:02:50.951: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv4x5/ss2 to complete update
Jan 28 11:02:50.952: INFO: Waiting for Pod e2e-tests-statefulset-wv4x5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 11:03:00.976: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv4x5/ss2 to complete update
Jan 28 11:03:00.977: INFO: Waiting for Pod e2e-tests-statefulset-wv4x5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 11:03:10.997: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv4x5/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 28 11:03:20.976: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wv4x5
Jan 28 11:03:20.982: INFO: Scaling statefulset ss2 to 0
Jan 28 11:03:51.030: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 11:03:51.039: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:03:51.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wv4x5" for this suite.
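The canary and phased updates above rely on the StatefulSet `RollingUpdate` strategy's `partition` field: only pods with an ordinal greater than or equal to the partition are moved to the new revision, so partition 3 on a 3-replica set updates nothing, partition 2 canaries only ss2-2, and lowering it to 0 completes the roll. A sketch of those steps, assuming the `ss2` set uses a `rollingUpdate` block as the test configures (the patches below are illustrative, not the framework's calls):

```shell
# Canary: bump the image but hold partition at 2, so only ordinal 2 updates.
kubectl -n e2e-tests-statefulset-wv4x5 patch statefulset ss2 --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/image",
   "value": "docker.io/library/nginx:1.15-alpine"},
  {"op": "replace", "path": "/spec/updateStrategy/rollingUpdate/partition", "value": 2}
]'

# Phase the rollout by lowering the partition step by step (2 -> 1 -> 0).
kubectl -n e2e-tests-statefulset-wv4x5 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'

# Watch pods converge on the update revision, as the log's "Waiting for Pod
# ... to have revision" lines do.
kubectl -n e2e-tests-statefulset-wv4x5 get pods -l app=ss2 \
  -L controller-revision-hash    # label selector is an assumption
```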
Jan 28 11:03:59.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:03:59.353: INFO: namespace: e2e-tests-statefulset-wv4x5, resource: bindings, ignored listing per whitelist
Jan 28 11:03:59.356: INFO: namespace e2e-tests-statefulset-wv4x5 deletion completed in 8.220250229s
• [SLOW TEST:169.859 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:03:59.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 11:03:59.567: INFO: Waiting up to 5m0s for pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-vztmj" to be "success or failure"
Jan 28 11:03:59.681: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 113.832637ms
Jan 28 11:04:01.702: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134942672s
Jan 28 11:04:03.724: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156648927s
Jan 28 11:04:05.997: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430472746s
Jan 28 11:04:08.032: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465323866s
Jan 28 11:04:10.052: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.484834986s
STEP: Saw pod success
Jan 28 11:04:10.052: INFO: Pod "pod-e2a73c69-41bd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:04:10.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e2a73c69-41bd-11ea-a04a-0242ac110005 container test-container:
STEP: delete the pod
Jan 28 11:04:10.520: INFO: Waiting for pod pod-e2a73c69-41bd-11ea-a04a-0242ac110005 to disappear
Jan 28 11:04:10.545: INFO: Pod pod-e2a73c69-41bd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:04:10.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vztmj" for this suite.
Jan 28 11:04:16.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:04:16.785: INFO: namespace: e2e-tests-emptydir-vztmj, resource: bindings, ignored listing per whitelist
Jan 28 11:04:16.800: INFO: namespace e2e-tests-emptydir-vztmj deletion completed in 6.236433133s
• [SLOW TEST:17.444 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:04:16.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:04:27.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8w2cw" for this suite.
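The Kubelet "should print the output to logs" case above runs a busybox command in a pod and reads it back through the logs API; the log itself shows no details, so the following hand-run equivalent is entirely an assumption (pod name, image tag, and message are made up):

```shell
# Illustrative only: run a one-shot busybox command and fetch its output via logs.
kubectl run busybox-logs-demo --restart=Never --image=busybox:1.29 \
  -- sh -c 'echo "Hello from busybox"'

# Wait for the pod to finish; --for=jsonpath requires a recent kubectl (1.23+).
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/busybox-logs-demo

kubectl logs busybox-logs-demo
kubectl delete pod busybox-logs-demo
```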
Jan 28 11:05:15.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:05:15.611: INFO: namespace: e2e-tests-kubelet-test-8w2cw, resource: bindings, ignored listing per whitelist
Jan 28 11:05:15.656: INFO: namespace e2e-tests-kubelet-test-8w2cw deletion completed in 48.261372049s
• [SLOW TEST:58.855 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:05:15.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 11:05:15.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-gd8wh'
Jan 28 11:05:17.962: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 11:05:17.962: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 28 11:05:17.991: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qfgb5]
Jan 28 11:05:17.992: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qfgb5" in namespace "e2e-tests-kubectl-gd8wh" to be "running and ready"
Jan 28 11:05:18.085: INFO: Pod "e2e-test-nginx-rc-qfgb5": Phase="Pending", Reason="", readiness=false. Elapsed: 92.901157ms
Jan 28 11:05:20.107: INFO: Pod "e2e-test-nginx-rc-qfgb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11528311s
Jan 28 11:05:22.118: INFO: Pod "e2e-test-nginx-rc-qfgb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126162716s
Jan 28 11:05:24.342: INFO: Pod "e2e-test-nginx-rc-qfgb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350000462s
Jan 28 11:05:26.360: INFO: Pod "e2e-test-nginx-rc-qfgb5": Phase="Running", Reason="", readiness=true. Elapsed: 8.367937088s
Jan 28 11:05:26.360: INFO: Pod "e2e-test-nginx-rc-qfgb5" satisfied condition "running and ready"
Jan 28 11:05:26.360: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qfgb5]
Jan 28 11:05:26.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gd8wh'
Jan 28 11:05:26.770: INFO: stderr: ""
Jan 28 11:05:26.771: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 28 11:05:26.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gd8wh'
Jan 28 11:05:26.922: INFO: stderr: ""
Jan 28 11:05:26.922: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:05:26.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gd8wh" for this suite.
Jan 28 11:05:51.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:05:51.196: INFO: namespace: e2e-tests-kubectl-gd8wh, resource: bindings, ignored listing per whitelist
Jan 28 11:05:51.233: INFO: namespace e2e-tests-kubectl-gd8wh deletion completed in 24.302921372s
• [SLOW TEST:35.577 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:05:51.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:05:57.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fngk2" for this suite.
Jan 28 11:06:03.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:06:03.956: INFO: namespace: e2e-tests-namespaces-fngk2, resource: bindings, ignored listing per whitelist
Jan 28 11:06:04.040: INFO: namespace e2e-tests-namespaces-fngk2 deletion completed in 6.230399087s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vsq9l" for this suite.
Jan 28 11:06:04.046: INFO: Namespace e2e-tests-nsdeletetest-vsq9l was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-ksgbn" for this suite.
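The Namespaces [Serial] case above asserts that namespace deletion garbage-collects the Services inside it: create a namespace, add a Service, delete and recreate the namespace, then verify the Service is gone. A hand-run version of the same sequence, assuming a live cluster (the namespace and service names below are demo values, not from the log):

```shell
# Illustrative reproduction of the test's create/delete/recreate/verify steps.
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip svc-demo --tcp=80:80

# Deleting the namespace removes everything in it; --wait blocks until finalized.
kubectl delete namespace nsdelete-demo --wait=true

# Recreate the namespace and confirm the Service did not survive.
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo get services
kubectl delete namespace nsdelete-demo
```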
Jan 28 11:06:10.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:06:10.214: INFO: namespace: e2e-tests-nsdeletetest-ksgbn, resource: bindings, ignored listing per whitelist
Jan 28 11:06:10.297: INFO: namespace e2e-tests-nsdeletetest-ksgbn deletion completed in 6.250960828s
• [SLOW TEST:19.064 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:06:10.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:06:10.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jb5bn" for this suite.
Jan 28 11:06:34.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:06:34.780: INFO: namespace: e2e-tests-pods-jb5bn, resource: bindings, ignored listing per whitelist
Jan 28 11:06:34.876: INFO: namespace e2e-tests-pods-jb5bn deletion completed in 24.175707796s
• [SLOW TEST:24.578 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:06:34.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 28 11:06:48.241: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:06:48.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-5f6mm" for this suite.
Jan 28 11:07:14.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:07:14.620: INFO: namespace: e2e-tests-replicaset-5f6mm, resource: bindings, ignored listing per whitelist
Jan 28 11:07:14.791: INFO: namespace e2e-tests-replicaset-5f6mm deletion completed in 26.336464705s
• [SLOW TEST:39.915 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:07:14.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:07:25.671: INFO: Waiting up to 5m0s for pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-pods-fv5g5" to be
"success or failure" Jan 28 11:07:25.687: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.940586ms Jan 28 11:07:27.695: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023255512s Jan 28 11:07:29.720: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047964904s Jan 28 11:07:31.886: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213887859s Jan 28 11:07:33.913: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241593356s Jan 28 11:07:36.312: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.64056643s STEP: Saw pod success Jan 28 11:07:36.313: INFO: Pod "client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:07:36.577: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005 container env3cont: STEP: delete the pod Jan 28 11:07:36.792: INFO: Waiting for pod client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:07:36.806: INFO: Pod client-envvars-5d6dc649-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:07:36.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fv5g5" for this suite. 
Jan 28 11:08:26.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:08:27.241: INFO: namespace: e2e-tests-pods-fv5g5, resource: bindings, ignored listing per whitelist Jan 28 11:08:27.247: INFO: namespace e2e-tests-pods-fv5g5 deletion completed in 50.422801559s • [SLOW TEST:72.455 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:08:27.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 28 11:08:27.539: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rtbwk,SelfLink:/api/v1/namespaces/e2e-tests-watch-rtbwk/configmaps/e2e-watch-test-resource-version,UID:824e1284-41be-11ea-a994-fa163e34d433,ResourceVersion:19736253,Generation:0,CreationTimestamp:2020-01-28 11:08:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 28 11:08:27.539: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rtbwk,SelfLink:/api/v1/namespaces/e2e-tests-watch-rtbwk/configmaps/e2e-watch-test-resource-version,UID:824e1284-41be-11ea-a994-fa163e34d433,ResourceVersion:19736254,Generation:0,CreationTimestamp:2020-01-28 11:08:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:08:27.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-rtbwk" for this suite. 
Jan 28 11:08:33.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:08:33.746: INFO: namespace: e2e-tests-watch-rtbwk, resource: bindings, ignored listing per whitelist Jan 28 11:08:33.794: INFO: namespace e2e-tests-watch-rtbwk deletion completed in 6.247781524s • [SLOW TEST:6.547 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:08:33.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 28 11:08:34.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-hftws" to be "success or failure" Jan 28 11:08:34.079: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": 
Phase="Pending", Reason="", readiness=false. Elapsed: 13.364994ms Jan 28 11:08:36.284: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217999793s Jan 28 11:08:38.304: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238579974s Jan 28 11:08:40.333: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267582669s Jan 28 11:08:42.384: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318634548s Jan 28 11:08:44.488: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421968773s STEP: Saw pod success Jan 28 11:08:44.489: INFO: Pod "downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:08:44.508: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005 container client-container: STEP: delete the pod Jan 28 11:08:44.795: INFO: Waiting for pod downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:08:44.819: INFO: Pod downwardapi-volume-8645785d-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:08:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hftws" for this suite. 
Jan 28 11:08:50.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:08:50.963: INFO: namespace: e2e-tests-projected-hftws, resource: bindings, ignored listing per whitelist Jan 28 11:08:51.012: INFO: namespace e2e-tests-projected-hftws deletion completed in 6.180599179s • [SLOW TEST:17.216 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:08:51.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 28 11:08:51.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-pvdk4" to be "success or failure" Jan 28 
11:08:51.277: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638849ms Jan 28 11:08:53.286: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016092445s Jan 28 11:08:55.310: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039783732s Jan 28 11:08:57.578: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308136335s Jan 28 11:08:59.592: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321840608s Jan 28 11:09:01.634: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.363876347s STEP: Saw pod success Jan 28 11:09:01.634: INFO: Pod "downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:09:01.939: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005 container client-container: STEP: delete the pod Jan 28 11:09:02.222: INFO: Waiting for pod downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:09:02.233: INFO: Pod downwardapi-volume-9083dba4-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:09:02.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pvdk4" for this suite. 
Jan 28 11:09:08.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:09:08.351: INFO: namespace: e2e-tests-downward-api-pvdk4, resource: bindings, ignored listing per whitelist Jan 28 11:09:08.526: INFO: namespace e2e-tests-downward-api-pvdk4 deletion completed in 6.279799717s • [SLOW TEST:17.514 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:09:08.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 28 11:09:08.753: INFO: Waiting up to 5m0s for pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-mnr5b" to be "success or failure" Jan 28 11:09:08.768: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.235892ms Jan 28 11:09:10.777: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024052348s Jan 28 11:09:12.844: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090223658s Jan 28 11:09:14.888: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134353408s Jan 28 11:09:16.900: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14660952s Jan 28 11:09:18.921: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.167817164s STEP: Saw pod success Jan 28 11:09:18.921: INFO: Pod "downward-api-9af1397f-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:09:18.928: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9af1397f-41be-11ea-a04a-0242ac110005 container dapi-container: STEP: delete the pod Jan 28 11:09:19.130: INFO: Waiting for pod downward-api-9af1397f-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:09:19.162: INFO: Pod downward-api-9af1397f-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:09:19.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mnr5b" for this suite. 
Jan 28 11:09:25.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:09:25.379: INFO: namespace: e2e-tests-downward-api-mnr5b, resource: bindings, ignored listing per whitelist Jan 28 11:09:25.435: INFO: namespace e2e-tests-downward-api-mnr5b deletion completed in 6.234689142s • [SLOW TEST:16.908 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:09:25.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-t4x2v/configmap-test-a4fa2aa7-41be-11ea-a04a-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 28 11:09:25.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-t4x2v" to be "success or failure" Jan 28 11:09:25.664: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.951033ms Jan 28 11:09:27.685: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067389609s Jan 28 11:09:29.714: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096506469s Jan 28 11:09:31.733: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115612118s Jan 28 11:09:33.749: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130966457s Jan 28 11:09:35.763: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145553575s STEP: Saw pod success Jan 28 11:09:35.763: INFO: Pod "pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:09:35.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005 container env-test: STEP: delete the pod Jan 28 11:09:36.419: INFO: Waiting for pod pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:09:36.428: INFO: Pod pod-configmaps-a4facdfa-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:09:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t4x2v" for this suite. 
Jan 28 11:09:42.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:09:42.770: INFO: namespace: e2e-tests-configmap-t4x2v, resource: bindings, ignored listing per whitelist Jan 28 11:09:42.788: INFO: namespace e2e-tests-configmap-t4x2v deletion completed in 6.351011723s • [SLOW TEST:17.352 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:09:42.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 28 11:09:43.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-qx4wv" to be "success or failure" Jan 28 11:09:43.187: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.30731ms Jan 28 11:09:45.201: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054285426s Jan 28 11:09:47.223: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076994164s Jan 28 11:09:49.241: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09472195s Jan 28 11:09:51.268: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12179238s Jan 28 11:09:53.287: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14047942s STEP: Saw pod success Jan 28 11:09:53.287: INFO: Pod "downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:09:53.294: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005 container client-container: STEP: delete the pod Jan 28 11:09:53.443: INFO: Waiting for pod downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:09:53.459: INFO: Pod downwardapi-volume-af70284c-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:09:53.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qx4wv" for this suite. 
Jan 28 11:09:59.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:09:59.859: INFO: namespace: e2e-tests-projected-qx4wv, resource: bindings, ignored listing per whitelist Jan 28 11:09:59.869: INFO: namespace e2e-tests-projected-qx4wv deletion completed in 6.397699639s • [SLOW TEST:17.081 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:09:59.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-b9a2a93e-41be-11ea-a04a-0242ac110005 STEP: Creating secret with name s-test-opt-upd-b9a2aa8b-41be-11ea-a04a-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b9a2a93e-41be-11ea-a04a-0242ac110005 STEP: Updating secret s-test-opt-upd-b9a2aa8b-41be-11ea-a04a-0242ac110005 STEP: Creating secret with name s-test-opt-create-b9a2aacf-41be-11ea-a04a-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:10:19.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-99qlw" for this suite. Jan 28 11:10:43.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:10:43.269: INFO: namespace: e2e-tests-secrets-99qlw, resource: bindings, ignored listing per whitelist Jan 28 11:10:43.392: INFO: namespace e2e-tests-secrets-99qlw deletion completed in 24.216560404s • [SLOW TEST:43.523 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:10:43.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 28 11:10:43.672: INFO: Waiting up to 5m0s for pod "pod-d37165c3-41be-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-k29vg" to be "success or failure" Jan 28 11:10:43.700: INFO: Pod 
"pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.992577ms Jan 28 11:10:46.308: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.635763832s Jan 28 11:10:48.335: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663030297s Jan 28 11:10:50.673: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.000587712s Jan 28 11:10:52.733: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060542624s Jan 28 11:10:54.755: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.082892833s STEP: Saw pod success Jan 28 11:10:54.755: INFO: Pod "pod-d37165c3-41be-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:10:54.760: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d37165c3-41be-11ea-a04a-0242ac110005 container test-container: STEP: delete the pod Jan 28 11:10:55.066: INFO: Waiting for pod pod-d37165c3-41be-11ea-a04a-0242ac110005 to disappear Jan 28 11:10:55.079: INFO: Pod pod-d37165c3-41be-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:10:55.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k29vg" for this suite. 
Jan 28 11:11:03.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:11:03.368: INFO: namespace: e2e-tests-emptydir-k29vg, resource: bindings, ignored listing per whitelist
Jan 28 11:11:03.464: INFO: namespace e2e-tests-emptydir-k29vg deletion completed in 8.37754711s
• [SLOW TEST:20.072 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:11:03.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-75sb
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 11:11:03.729: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-75sb" in namespace "e2e-tests-subpath-fmvl5" to be "success or failure"
Jan 28 11:11:03.755: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false.
Elapsed: 25.249957ms
Jan 28 11:11:06.096: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366230196s
Jan 28 11:11:08.119: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389641172s
Jan 28 11:11:10.131: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401152621s
Jan 28 11:11:12.170: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440662049s
Jan 28 11:11:14.188: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.45884752s
Jan 28 11:11:16.235: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.505705871s
Jan 28 11:11:18.248: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.518733405s
Jan 28 11:11:20.263: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.53375643s
Jan 28 11:11:22.278: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 18.548820491s
Jan 28 11:11:24.290: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 20.560027507s
Jan 28 11:11:26.325: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 22.595649742s
Jan 28 11:11:28.344: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 24.614058677s
Jan 28 11:11:30.367: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 26.637068838s
Jan 28 11:11:32.378: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false.
Elapsed: 28.648809284s
Jan 28 11:11:34.415: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 30.685316147s
Jan 28 11:11:36.432: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 32.702176536s
Jan 28 11:11:38.460: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Running", Reason="", readiness=false. Elapsed: 34.730136191s
Jan 28 11:11:40.790: INFO: Pod "pod-subpath-test-downwardapi-75sb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.060822487s
STEP: Saw pod success
Jan 28 11:11:40.791: INFO: Pod "pod-subpath-test-downwardapi-75sb" satisfied condition "success or failure"
Jan 28 11:11:40.800: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-75sb container test-container-subpath-downwardapi-75sb:
STEP: delete the pod
Jan 28 11:11:41.143: INFO: Waiting for pod pod-subpath-test-downwardapi-75sb to disappear
Jan 28 11:11:41.172: INFO: Pod pod-subpath-test-downwardapi-75sb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-75sb
Jan 28 11:11:41.172: INFO: Deleting pod "pod-subpath-test-downwardapi-75sb" in namespace "e2e-tests-subpath-fmvl5"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:11:41.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fmvl5" for this suite.
Jan 28 11:11:47.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:11:47.302: INFO: namespace: e2e-tests-subpath-fmvl5, resource: bindings, ignored listing per whitelist
Jan 28 11:11:47.454: INFO: namespace e2e-tests-subpath-fmvl5 deletion completed in 6.268241958s
• [SLOW TEST:43.990 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:11:47.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dw6fg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 11:11:47.686: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 11:12:18.100: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s
'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dw6fg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:12:18.100: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:12:18.210328 8 log.go:172] (0xc0000eb6b0) (0xc0016230e0) Create stream
I0128 11:12:18.210575 8 log.go:172] (0xc0000eb6b0) (0xc0016230e0) Stream added, broadcasting: 1
I0128 11:12:18.219179 8 log.go:172] (0xc0000eb6b0) Reply frame received for 1
I0128 11:12:18.219310 8 log.go:172] (0xc0000eb6b0) (0xc001a979a0) Create stream
I0128 11:12:18.219330 8 log.go:172] (0xc0000eb6b0) (0xc001a979a0) Stream added, broadcasting: 3
I0128 11:12:18.221972 8 log.go:172] (0xc0000eb6b0) Reply frame received for 3
I0128 11:12:18.222002 8 log.go:172] (0xc0000eb6b0) (0xc001623180) Create stream
I0128 11:12:18.222013 8 log.go:172] (0xc0000eb6b0) (0xc001623180) Stream added, broadcasting: 5
I0128 11:12:18.223714 8 log.go:172] (0xc0000eb6b0) Reply frame received for 5
I0128 11:12:18.483341 8 log.go:172] (0xc0000eb6b0) Data frame received for 3
I0128 11:12:18.483535 8 log.go:172] (0xc001a979a0) (3) Data frame handling
I0128 11:12:18.483603 8 log.go:172] (0xc001a979a0) (3) Data frame sent
I0128 11:12:18.702478 8 log.go:172] (0xc0000eb6b0) (0xc001a979a0) Stream removed, broadcasting: 3
I0128 11:12:18.702812 8 log.go:172] (0xc0000eb6b0) (0xc001623180) Stream removed, broadcasting: 5
I0128 11:12:18.702992 8 log.go:172] (0xc0000eb6b0) Data frame received for 1
I0128 11:12:18.703226 8 log.go:172] (0xc0016230e0) (1) Data frame handling
I0128 11:12:18.703270 8 log.go:172] (0xc0016230e0) (1) Data frame sent
I0128 11:12:18.703315 8 log.go:172] (0xc0000eb6b0) (0xc0016230e0) Stream removed, broadcasting: 1
I0128 11:12:18.703378 8 log.go:172] (0xc0000eb6b0) Go away received
I0128 11:12:18.703851 8 log.go:172] (0xc0000eb6b0) (0xc0016230e0) Stream removed, broadcasting: 1
I0128
11:12:18.703880 8 log.go:172] (0xc0000eb6b0) (0xc001a979a0) Stream removed, broadcasting: 3
I0128 11:12:18.703897 8 log.go:172] (0xc0000eb6b0) (0xc001623180) Stream removed, broadcasting: 5
Jan 28 11:12:18.704: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:12:18.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dw6fg" for this suite.
Jan 28 11:12:44.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:12:45.056: INFO: namespace: e2e-tests-pod-network-test-dw6fg, resource: bindings, ignored listing per whitelist
Jan 28 11:12:45.149: INFO: namespace e2e-tests-pod-network-test-dw6fg deletion completed in 26.424158272s
• [SLOW TEST:57.695 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:12:45.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support
(root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 28 11:12:45.554: INFO: Waiting up to 5m0s for pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-lln9p" to be "success or failure"
Jan 28 11:12:45.568: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.57764ms
Jan 28 11:12:47.584: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029988654s
Jan 28 11:12:49.597: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04263992s
Jan 28 11:12:51.613: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058788434s
Jan 28 11:12:53.849: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294384946s
Jan 28 11:12:56.005: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.450846009s
STEP: Saw pod success
Jan 28 11:12:56.005: INFO: Pod "pod-1c1d0550-41bf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:12:56.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1c1d0550-41bf-11ea-a04a-0242ac110005 container test-container:
STEP: delete the pod
Jan 28 11:12:56.283: INFO: Waiting for pod pod-1c1d0550-41bf-11ea-a04a-0242ac110005 to disappear
Jan 28 11:12:56.296: INFO: Pod pod-1c1d0550-41bf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:12:56.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lln9p" for this suite.
Jan 28 11:13:02.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:13:02.979: INFO: namespace: e2e-tests-emptydir-lln9p, resource: bindings, ignored listing per whitelist
Jan 28 11:13:03.022: INFO: namespace e2e-tests-emptydir-lln9p deletion completed in 6.586986499s
• [SLOW TEST:17.872 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:13:03.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 11:13:03.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tx7bg'
Jan 28 11:13:03.447: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 11:13:03.447: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 28 11:13:03.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-tx7bg'
Jan 28 11:13:03.977: INFO: stderr: ""
Jan 28 11:13:03.978: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:13:03.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tx7bg" for this suite.
Jan 28 11:13:28.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:13:28.226: INFO: namespace: e2e-tests-kubectl-tx7bg, resource: bindings, ignored listing per whitelist
Jan 28 11:13:28.332: INFO: namespace e2e-tests-kubectl-tx7bg deletion completed in 24.221944791s
• [SLOW TEST:25.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:13:28.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:13:28.711: INFO: Creating ReplicaSet my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005
Jan 28 11:13:28.779: INFO: Pod name my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005: Found 0 pods out of 1
Jan 28 11:13:33.802: INFO: Pod name my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005: Found 1 pods out of 1
Jan 28 11:13:33.802: INFO: Ensuring a pod for ReplicaSet
"my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005" is running Jan 28 11:13:39.869: INFO: Pod "my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005-kmwfq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 11:13:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 11:13:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 11:13:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 11:13:28 +0000 UTC Reason: Message:}]) Jan 28 11:13:39.869: INFO: Trying to dial the pod Jan 28 11:13:44.917: INFO: Controller my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005-kmwfq]: "my-hostname-basic-35e6fbca-41bf-11ea-a04a-0242ac110005-kmwfq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:13:44.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-x4m4s" for this suite. 
Jan 28 11:13:53.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:13:54.387: INFO: namespace: e2e-tests-replicaset-x4m4s, resource: bindings, ignored listing per whitelist
Jan 28 11:13:54.402: INFO: namespace e2e-tests-replicaset-x4m4s deletion completed in 9.478486665s
• [SLOW TEST:26.070 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:13:54.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-dlnfw
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-dlnfw
STEP: Deleting pre-stop pod
Jan 28 11:14:17.989: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected.
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:14:18.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-dlnfw" for this suite.
Jan 28 11:14:58.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:14:58.153: INFO: namespace: e2e-tests-prestop-dlnfw, resource: bindings, ignored listing per whitelist
Jan 28 11:14:58.309: INFO: namespace e2e-tests-prestop-dlnfw deletion completed in 40.235060715s
• [SLOW TEST:63.906 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:14:58.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap
e2e-tests-configmap-4txjj/configmap-test-6b89b69b-41bf-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 11:14:58.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-4txjj" to be "success or failure"
Jan 28 11:14:58.757: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.21134ms
Jan 28 11:15:00.769: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043510511s
Jan 28 11:15:02.783: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058134976s
Jan 28 11:15:04.978: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252703294s
Jan 28 11:15:06.992: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266622953s
Jan 28 11:15:09.010: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.284914809s
STEP: Saw pod success
Jan 28 11:15:09.010: INFO: Pod "pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:15:09.017: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005 container env-test:
STEP: delete the pod
Jan 28 11:15:09.089: INFO: Waiting for pod pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005 to disappear
Jan 28 11:15:09.118: INFO: Pod pod-configmaps-6b8b1a5d-41bf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:15:09.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4txjj" for this suite.
Jan 28 11:15:15.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:15:15.282: INFO: namespace: e2e-tests-configmap-4txjj, resource: bindings, ignored listing per whitelist
Jan 28 11:15:15.422: INFO: namespace e2e-tests-configmap-4txjj deletion completed in 6.239871108s
• [SLOW TEST:17.112 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:15:15.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 28 11:15:15.681: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 28 11:15:20.699: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:15:22.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-7kd5n" for this suite.
Jan 28 11:15:34.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:15:34.828: INFO: namespace: e2e-tests-replication-controller-7kd5n, resource: bindings, ignored listing per whitelist
Jan 28 11:15:34.843: INFO: namespace e2e-tests-replication-controller-7kd5n deletion completed in 12.7433765s
• [SLOW TEST:19.421 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:15:34.843: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5mfrt/secret-test-812ca6cc-41bf-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 11:15:35.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-5mfrt" to be "success or failure"
Jan 28 11:15:35.300: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 181.639144ms
Jan 28 11:15:37.313: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194237729s
Jan 28 11:15:39.325: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205804901s
Jan 28 11:15:41.338: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219010109s
Jan 28 11:15:43.355: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236503184s
Jan 28 11:15:46.508: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 11.389055589s
STEP: Saw pod success
Jan 28 11:15:46.509: INFO: Pod "pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:15:46.537: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005 container env-test:
STEP: delete the pod
Jan 28 11:15:47.003: INFO: Waiting for pod pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005 to disappear
Jan 28 11:15:47.050: INFO: Pod pod-configmaps-81391e41-41bf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:15:47.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5mfrt" for this suite.
Jan 28 11:15:53.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:15:53.395: INFO: namespace: e2e-tests-secrets-5mfrt, resource: bindings, ignored listing per whitelist
Jan 28 11:15:53.420: INFO: namespace e2e-tests-secrets-5mfrt deletion completed in 6.358516336s
• [SLOW TEST:18.577 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:15:53.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 28 11:16:04.284: INFO: Successfully updated pod "pod-update-8c4f9fa7-41bf-11ea-a04a-0242ac110005" STEP: verifying the updated pod is in kubernetes Jan 28 11:16:04.311: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:16:04.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8zq7z" for this suite. Jan 28 11:16:28.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:16:28.618: INFO: namespace: e2e-tests-pods-8zq7z, resource: bindings, ignored listing per whitelist Jan 28 11:16:28.701: INFO: namespace e2e-tests-pods-8zq7z deletion completed in 24.385002242s • [SLOW TEST:35.280 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
Jan 28 11:16:28.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 28 11:16:51.051: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:51.051: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:51.125522 8 log.go:172] (0xc001e6a2c0) (0xc001baf4a0) Create stream
I0128 11:16:51.125702 8 log.go:172] (0xc001e6a2c0) (0xc001baf4a0) Stream added, broadcasting: 1
I0128 11:16:51.146742 8 log.go:172] (0xc001e6a2c0) Reply frame received for 1
I0128 11:16:51.146854 8 log.go:172] (0xc001e6a2c0) (0xc0011b3900) Create stream
I0128 11:16:51.146877 8 log.go:172] (0xc001e6a2c0) (0xc0011b3900) Stream added, broadcasting: 3
I0128 11:16:51.148151 8 log.go:172] (0xc001e6a2c0) Reply frame received for 3
I0128 11:16:51.148183 8 log.go:172] (0xc001e6a2c0) (0xc001baf540) Create stream
I0128 11:16:51.148201 8 log.go:172] (0xc001e6a2c0) (0xc001baf540) Stream added, broadcasting: 5
I0128 11:16:51.150109 8 log.go:172] (0xc001e6a2c0) Reply frame received for 5
I0128 11:16:51.394149 8 log.go:172] (0xc001e6a2c0) Data frame received for 3
I0128 11:16:51.394330 8 log.go:172] (0xc0011b3900) (3) Data frame handling
I0128 11:16:51.394368 8 log.go:172] (0xc0011b3900) (3) Data frame sent
I0128 11:16:51.528956 8 log.go:172] (0xc001e6a2c0) Data frame received for 1
I0128 11:16:51.529122 8 log.go:172] (0xc001e6a2c0) (0xc001baf540) Stream removed, broadcasting: 5
I0128 11:16:51.529186 8 log.go:172] (0xc001baf4a0) (1) Data frame handling
I0128 11:16:51.529206 8 log.go:172] (0xc001baf4a0) (1) Data frame sent
I0128 11:16:51.529249 8 log.go:172] (0xc001e6a2c0) (0xc0011b3900) Stream removed, broadcasting: 3
I0128 11:16:51.529298 8 log.go:172] (0xc001e6a2c0) (0xc001baf4a0) Stream removed, broadcasting: 1
I0128 11:16:51.529315 8 log.go:172] (0xc001e6a2c0) Go away received
I0128 11:16:51.530090 8 log.go:172] (0xc001e6a2c0) (0xc001baf4a0) Stream removed, broadcasting: 1
I0128 11:16:51.530130 8 log.go:172] (0xc001e6a2c0) (0xc0011b3900) Stream removed, broadcasting: 3
I0128 11:16:51.530149 8 log.go:172] (0xc001e6a2c0) (0xc001baf540) Stream removed, broadcasting: 5
Jan 28 11:16:51.530: INFO: Exec stderr: ""
Jan 28 11:16:51.530: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:51.530: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:51.626532 8 log.go:172] (0xc001d682c0) (0xc0011b3b80) Create stream
I0128 11:16:51.626812 8 log.go:172] (0xc001d682c0) (0xc0011b3b80) Stream added, broadcasting: 1
I0128 11:16:51.648508 8 log.go:172] (0xc001d682c0) Reply frame received for 1
I0128 11:16:51.648828 8 log.go:172] (0xc001d682c0) (0xc001f48000) Create stream
I0128 11:16:51.648856 8 log.go:172] (0xc001d682c0) (0xc001f48000) Stream added, broadcasting: 3
I0128 11:16:51.651771 8 log.go:172] (0xc001d682c0) Reply frame received for 3
I0128 11:16:51.651899 8 log.go:172] (0xc001d682c0) (0xc002130000) Create stream
I0128 11:16:51.651916 8 log.go:172] (0xc001d682c0) (0xc002130000) Stream added, broadcasting: 5
I0128 11:16:51.653646 8 log.go:172] (0xc001d682c0) Reply frame received for 5
I0128 11:16:51.826534 8 log.go:172] (0xc001d682c0) Data frame received for 3
I0128 11:16:51.826739 8 log.go:172] (0xc001f48000) (3) Data frame handling
I0128 11:16:51.826805 8 log.go:172] (0xc001f48000) (3) Data frame sent
I0128 11:16:51.972467 8 log.go:172] (0xc001d682c0) (0xc001f48000) Stream removed, broadcasting: 3
I0128 11:16:51.972681 8 log.go:172] (0xc001d682c0) Data frame received for 1
I0128 11:16:51.972728 8 log.go:172] (0xc001d682c0) (0xc002130000) Stream removed, broadcasting: 5
I0128 11:16:51.972806 8 log.go:172] (0xc0011b3b80) (1) Data frame handling
I0128 11:16:51.972839 8 log.go:172] (0xc0011b3b80) (1) Data frame sent
I0128 11:16:51.972847 8 log.go:172] (0xc001d682c0) (0xc0011b3b80) Stream removed, broadcasting: 1
I0128 11:16:51.972874 8 log.go:172] (0xc001d682c0) Go away received
I0128 11:16:51.973275 8 log.go:172] (0xc001d682c0) (0xc0011b3b80) Stream removed, broadcasting: 1
I0128 11:16:51.973310 8 log.go:172] (0xc001d682c0) (0xc001f48000) Stream removed, broadcasting: 3
I0128 11:16:51.973332 8 log.go:172] (0xc001d682c0) (0xc002130000) Stream removed, broadcasting: 5
Jan 28 11:16:51.973: INFO: Exec stderr: ""
Jan 28 11:16:51.973: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:51.973: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:52.052493 8 log.go:172] (0xc001d68370) (0xc00203a1e0) Create stream
I0128 11:16:52.052831 8 log.go:172] (0xc001d68370) (0xc00203a1e0) Stream added, broadcasting: 1
I0128 11:16:52.059752 8 log.go:172] (0xc001d68370) Reply frame received for 1
I0128 11:16:52.059816 8 log.go:172] (0xc001d68370) (0xc00203a280) Create stream
I0128 11:16:52.059828 8 log.go:172] (0xc001d68370) (0xc00203a280) Stream added, broadcasting: 3
I0128 11:16:52.061119 8 log.go:172] (0xc001d68370) Reply frame received for 3
I0128 11:16:52.061156 8 log.go:172] (0xc001d68370) (0xc000a94000) Create stream
I0128 11:16:52.061206 8 log.go:172] (0xc001d68370) (0xc000a94000) Stream added, broadcasting: 5
I0128 11:16:52.062225 8 log.go:172] (0xc001d68370) Reply frame received for 5
I0128 11:16:52.189253 8 log.go:172] (0xc001d68370) Data frame received for 3
I0128 11:16:52.189535 8 log.go:172] (0xc00203a280) (3) Data frame handling
I0128 11:16:52.189593 8 log.go:172] (0xc00203a280) (3) Data frame sent
I0128 11:16:52.353571 8 log.go:172] (0xc001d68370) Data frame received for 1
I0128 11:16:52.353904 8 log.go:172] (0xc001d68370) (0xc000a94000) Stream removed, broadcasting: 5
I0128 11:16:52.354036 8 log.go:172] (0xc00203a1e0) (1) Data frame handling
I0128 11:16:52.354094 8 log.go:172] (0xc00203a1e0) (1) Data frame sent
I0128 11:16:52.354123 8 log.go:172] (0xc001d68370) (0xc00203a280) Stream removed, broadcasting: 3
I0128 11:16:52.354153 8 log.go:172] (0xc001d68370) (0xc00203a1e0) Stream removed, broadcasting: 1
I0128 11:16:52.354175 8 log.go:172] (0xc001d68370) Go away received
I0128 11:16:52.354673 8 log.go:172] (0xc001d68370) (0xc00203a1e0) Stream removed, broadcasting: 1
I0128 11:16:52.354722 8 log.go:172] (0xc001d68370) (0xc00203a280) Stream removed, broadcasting: 3
I0128 11:16:52.354808 8 log.go:172] (0xc001d68370) (0xc000a94000) Stream removed, broadcasting: 5
Jan 28 11:16:52.354: INFO: Exec stderr: ""
Jan 28 11:16:52.355: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:52.355: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:52.427373 8 log.go:172] (0xc001d68840) (0xc00203a500) Create stream
I0128 11:16:52.427566 8 log.go:172] (0xc001d68840) (0xc00203a500) Stream added, broadcasting: 1
I0128 11:16:52.437674 8 log.go:172] (0xc001d68840) Reply frame received for 1
I0128 11:16:52.437759 8 log.go:172] (0xc001d68840) (0xc001bea000) Create stream
I0128 11:16:52.437776 8 log.go:172] (0xc001d68840) (0xc001bea000) Stream added, broadcasting: 3
I0128 11:16:52.438791 8 log.go:172] (0xc001d68840) Reply frame received for 3
I0128 11:16:52.438812 8 log.go:172] (0xc001d68840) (0xc00203a5a0) Create stream
I0128 11:16:52.438821 8 log.go:172] (0xc001d68840) (0xc00203a5a0) Stream added, broadcasting: 5
I0128 11:16:52.439776 8 log.go:172] (0xc001d68840) Reply frame received for 5
I0128 11:16:52.615975 8 log.go:172] (0xc001d68840) Data frame received for 3
I0128 11:16:52.616099 8 log.go:172] (0xc001bea000) (3) Data frame handling
I0128 11:16:52.616132 8 log.go:172] (0xc001bea000) (3) Data frame sent
I0128 11:16:52.729879 8 log.go:172] (0xc001d68840) Data frame received for 1
I0128 11:16:52.729990 8 log.go:172] (0xc00203a500) (1) Data frame handling
I0128 11:16:52.730016 8 log.go:172] (0xc00203a500) (1) Data frame sent
I0128 11:16:52.730060 8 log.go:172] (0xc001d68840) (0xc00203a500) Stream removed, broadcasting: 1
I0128 11:16:52.730274 8 log.go:172] (0xc001d68840) (0xc001bea000) Stream removed, broadcasting: 3
I0128 11:16:52.730320 8 log.go:172] (0xc001d68840) (0xc00203a5a0) Stream removed, broadcasting: 5
I0128 11:16:52.730367 8 log.go:172] (0xc001d68840) (0xc00203a500) Stream removed, broadcasting: 1
I0128 11:16:52.730381 8 log.go:172] (0xc001d68840) (0xc001bea000) Stream removed, broadcasting: 3
I0128 11:16:52.730389 8 log.go:172] (0xc001d68840) (0xc00203a5a0) Stream removed, broadcasting: 5
Jan 28 11:16:52.731: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
I0128 11:16:52.731729 8 log.go:172] (0xc001d68840) Go away received
Jan 28 11:16:52.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:52.731: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:52.813196 8 log.go:172] (0xc0000eb550) (0xc0021301e0) Create stream
I0128 11:16:52.813315 8 log.go:172] (0xc0000eb550) (0xc0021301e0) Stream added, broadcasting: 1
I0128 11:16:52.816375 8 log.go:172] (0xc0000eb550) Reply frame received for 1
I0128 11:16:52.816444 8 log.go:172] (0xc0000eb550) (0xc0015f8000) Create stream
I0128 11:16:52.816466 8 log.go:172] (0xc0000eb550) (0xc0015f8000) Stream added, broadcasting: 3
I0128 11:16:52.817498 8 log.go:172] (0xc0000eb550) Reply frame received for 3
I0128 11:16:52.817540 8 log.go:172] (0xc0000eb550) (0xc001bea0a0) Create stream
I0128 11:16:52.817553 8 log.go:172] (0xc0000eb550) (0xc001bea0a0) Stream added, broadcasting: 5
I0128 11:16:52.818482 8 log.go:172] (0xc0000eb550) Reply frame received for 5
I0128 11:16:52.935045 8 log.go:172] (0xc0000eb550) Data frame received for 3
I0128 11:16:52.935151 8 log.go:172] (0xc0015f8000) (3) Data frame handling
I0128 11:16:52.935196 8 log.go:172] (0xc0015f8000) (3) Data frame sent
I0128 11:16:53.040555 8 log.go:172] (0xc0000eb550) Data frame received for 1
I0128 11:16:53.040691 8 log.go:172] (0xc0000eb550) (0xc0015f8000) Stream removed, broadcasting: 3
I0128 11:16:53.040744 8 log.go:172] (0xc0021301e0) (1) Data frame handling
I0128 11:16:53.040784 8 log.go:172] (0xc0021301e0) (1) Data frame sent
I0128 11:16:53.040813 8 log.go:172] (0xc0000eb550) (0xc001bea0a0) Stream removed, broadcasting: 5
I0128 11:16:53.040863 8 log.go:172] (0xc0000eb550) (0xc0021301e0) Stream removed, broadcasting: 1
I0128 11:16:53.040913 8 log.go:172] (0xc0000eb550) Go away received
I0128 11:16:53.041312 8 log.go:172] (0xc0000eb550) (0xc0021301e0) Stream removed, broadcasting: 1
I0128 11:16:53.041343 8 log.go:172] (0xc0000eb550) (0xc0015f8000) Stream removed, broadcasting: 3
I0128 11:16:53.041356 8 log.go:172] (0xc0000eb550) (0xc001bea0a0) Stream removed, broadcasting: 5
Jan 28 11:16:53.041: INFO: Exec stderr: ""
Jan 28 11:16:53.041: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:53.041: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:53.141770 8 log.go:172] (0xc0000eba20) (0xc002130460) Create stream
I0128 11:16:53.141928 8 log.go:172] (0xc0000eba20) (0xc002130460) Stream added, broadcasting: 1
I0128 11:16:53.185100 8 log.go:172] (0xc0000eba20) Reply frame received for 1
I0128 11:16:53.185248 8 log.go:172] (0xc0000eba20) (0xc001f48140) Create stream
I0128 11:16:53.185265 8 log.go:172] (0xc0000eba20) (0xc001f48140) Stream added, broadcasting: 3
I0128 11:16:53.186101 8 log.go:172] (0xc0000eba20) Reply frame received for 3
I0128 11:16:53.186121 8 log.go:172] (0xc0000eba20) (0xc00203a6e0) Create stream
I0128 11:16:53.186134 8 log.go:172] (0xc0000eba20) (0xc00203a6e0) Stream added, broadcasting: 5
I0128 11:16:53.186959 8 log.go:172] (0xc0000eba20) Reply frame received for 5
I0128 11:16:53.286926 8 log.go:172] (0xc0000eba20) Data frame received for 3
I0128 11:16:53.287049 8 log.go:172] (0xc001f48140) (3) Data frame handling
I0128 11:16:53.287081 8 log.go:172] (0xc001f48140) (3) Data frame sent
I0128 11:16:53.400607 8 log.go:172] (0xc0000eba20) Data frame received for 1
I0128 11:16:53.400697 8 log.go:172] (0xc002130460) (1) Data frame handling
I0128 11:16:53.400716 8 log.go:172] (0xc002130460) (1) Data frame sent
I0128 11:16:53.400763 8 log.go:172] (0xc0000eba20) (0xc002130460) Stream removed, broadcasting: 1
I0128 11:16:53.400899 8 log.go:172] (0xc0000eba20) (0xc001f48140) Stream removed, broadcasting: 3
I0128 11:16:53.400987 8 log.go:172] (0xc0000eba20) (0xc00203a6e0) Stream removed, broadcasting: 5
I0128 11:16:53.401027 8 log.go:172] (0xc0000eba20) (0xc002130460) Stream removed, broadcasting: 1
I0128 11:16:53.401038 8 log.go:172] (0xc0000eba20) (0xc001f48140) Stream removed, broadcasting: 3
I0128 11:16:53.401060 8 log.go:172] (0xc0000eba20) (0xc00203a6e0) Stream removed, broadcasting: 5
I0128 11:16:53.401263 8 log.go:172] (0xc0000eba20) Go away received
Jan 28 11:16:53.401: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 28 11:16:53.401: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:53.401: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:53.529461 8 log.go:172] (0xc0021c64d0) (0xc001f485a0) Create stream
I0128 11:16:53.529669 8 log.go:172] (0xc0021c64d0) (0xc001f485a0) Stream added, broadcasting: 1
I0128 11:16:53.533874 8 log.go:172] (0xc0021c64d0) Reply frame received for 1
I0128 11:16:53.533905 8 log.go:172] (0xc0021c64d0) (0xc0015f80a0) Create stream
I0128 11:16:53.533916 8 log.go:172] (0xc0021c64d0) (0xc0015f80a0) Stream added, broadcasting: 3
I0128 11:16:53.535030 8 log.go:172] (0xc0021c64d0) Reply frame received for 3
I0128 11:16:53.535053 8 log.go:172] (0xc0021c64d0) (0xc0015f8140) Create stream
I0128 11:16:53.535060 8 log.go:172] (0xc0021c64d0) (0xc0015f8140) Stream added, broadcasting: 5
I0128 11:16:53.535813 8 log.go:172] (0xc0021c64d0) Reply frame received for 5
I0128 11:16:53.668682 8 log.go:172] (0xc0021c64d0) Data frame received for 3
I0128 11:16:53.668798 8 log.go:172] (0xc0015f80a0) (3) Data frame handling
I0128 11:16:53.668821 8 log.go:172] (0xc0015f80a0) (3) Data frame sent
I0128 11:16:53.856658 8 log.go:172] (0xc0021c64d0) (0xc0015f80a0) Stream removed, broadcasting: 3
I0128 11:16:53.857016 8 log.go:172] (0xc0021c64d0) Data frame received for 1
I0128 11:16:53.857048 8 log.go:172] (0xc001f485a0) (1) Data frame handling
I0128 11:16:53.857074 8 log.go:172] (0xc001f485a0) (1) Data frame sent
I0128 11:16:53.857504 8 log.go:172] (0xc0021c64d0) (0xc001f485a0) Stream removed, broadcasting: 1
I0128 11:16:53.858238 8 log.go:172] (0xc0021c64d0) (0xc0015f8140) Stream removed, broadcasting: 5
I0128 11:16:53.858300 8 log.go:172] (0xc0021c64d0) Go away received
I0128 11:16:53.858729 8 log.go:172] (0xc0021c64d0) (0xc001f485a0) Stream removed, broadcasting: 1
I0128 11:16:53.858765 8 log.go:172] (0xc0021c64d0) (0xc0015f80a0) Stream removed, broadcasting: 3
I0128 11:16:53.858779 8 log.go:172] (0xc0021c64d0) (0xc0015f8140) Stream removed, broadcasting: 5
Jan 28 11:16:53.858: INFO: Exec stderr: ""
Jan 28 11:16:53.859: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:53.859: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:53.965734 8 log.go:172] (0xc002150370) (0xc0015f83c0) Create stream
I0128 11:16:53.966076 8 log.go:172] (0xc002150370) (0xc0015f83c0) Stream added, broadcasting: 1
I0128 11:16:53.991623 8 log.go:172] (0xc002150370) Reply frame received for 1
I0128 11:16:53.991750 8 log.go:172] (0xc002150370) (0xc001f486e0) Create stream
I0128 11:16:53.991771 8 log.go:172] (0xc002150370) (0xc001f486e0) Stream added, broadcasting: 3
I0128 11:16:53.993071 8 log.go:172] (0xc002150370) Reply frame received for 3
I0128 11:16:53.993118 8 log.go:172] (0xc002150370) (0xc001bea140) Create stream
I0128 11:16:53.993132 8 log.go:172] (0xc002150370) (0xc001bea140) Stream added, broadcasting: 5
I0128 11:16:53.995490 8 log.go:172] (0xc002150370) Reply frame received for 5
I0128 11:16:54.131300 8 log.go:172] (0xc002150370) Data frame received for 3
I0128 11:16:54.131426 8 log.go:172] (0xc001f486e0) (3) Data frame handling
I0128 11:16:54.131450 8 log.go:172] (0xc001f486e0) (3) Data frame sent
I0128 11:16:54.266411 8 log.go:172] (0xc002150370) Data frame received for 1
I0128 11:16:54.266486 8 log.go:172] (0xc0015f83c0) (1) Data frame handling
I0128 11:16:54.266508 8 log.go:172] (0xc0015f83c0) (1) Data frame sent
I0128 11:16:54.266656 8 log.go:172] (0xc002150370) (0xc0015f83c0) Stream removed, broadcasting: 1
I0128 11:16:54.267286 8 log.go:172] (0xc002150370) (0xc001f486e0) Stream removed, broadcasting: 3
I0128 11:16:54.268057 8 log.go:172] (0xc002150370) (0xc001bea140) Stream removed, broadcasting: 5
I0128 11:16:54.268091 8 log.go:172] (0xc002150370) Go away received
I0128 11:16:54.268584 8 log.go:172] (0xc002150370) (0xc0015f83c0) Stream removed, broadcasting: 1
I0128 11:16:54.268717 8 log.go:172] (0xc002150370) (0xc001f486e0) Stream removed, broadcasting: 3
I0128 11:16:54.268750 8 log.go:172] (0xc002150370) (0xc001bea140) Stream removed, broadcasting: 5
Jan 28 11:16:54.268: INFO: Exec stderr: ""
Jan 28 11:16:54.268: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:54.269: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:54.359031 8 log.go:172] (0xc001d68d10) (0xc00203a8c0) Create stream
I0128 11:16:54.359243 8 log.go:172] (0xc001d68d10) (0xc00203a8c0) Stream added, broadcasting: 1
I0128 11:16:54.373733 8 log.go:172] (0xc001d68d10) Reply frame received for 1
I0128 11:16:54.373781 8 log.go:172] (0xc001d68d10) (0xc001f48780) Create stream
I0128 11:16:54.373794 8 log.go:172] (0xc001d68d10) (0xc001f48780) Stream added, broadcasting: 3
I0128 11:16:54.375732 8 log.go:172] (0xc001d68d10) Reply frame received for 3
I0128 11:16:54.375763 8 log.go:172] (0xc001d68d10) (0xc001bea280) Create stream
I0128 11:16:54.375774 8 log.go:172] (0xc001d68d10) (0xc001bea280) Stream added, broadcasting: 5
I0128 11:16:54.377601 8 log.go:172] (0xc001d68d10) Reply frame received for 5
I0128 11:16:54.540180 8 log.go:172] (0xc001d68d10) Data frame received for 3
I0128 11:16:54.540408 8 log.go:172] (0xc001f48780) (3) Data frame handling
I0128 11:16:54.540435 8 log.go:172] (0xc001f48780) (3) Data frame sent
I0128 11:16:54.714505 8 log.go:172] (0xc001d68d10) (0xc001f48780) Stream removed, broadcasting: 3
I0128 11:16:54.714673 8 log.go:172] (0xc001d68d10) Data frame received for 1
I0128 11:16:54.714697 8 log.go:172] (0xc001d68d10) (0xc001bea280) Stream removed, broadcasting: 5
I0128 11:16:54.714745 8 log.go:172] (0xc00203a8c0) (1) Data frame handling
I0128 11:16:54.714762 8 log.go:172] (0xc00203a8c0) (1) Data frame sent
I0128 11:16:54.714773 8 log.go:172] (0xc001d68d10) (0xc00203a8c0) Stream removed, broadcasting: 1
I0128 11:16:54.714793 8 log.go:172] (0xc001d68d10) Go away received
I0128 11:16:54.715651 8 log.go:172] (0xc001d68d10) (0xc00203a8c0) Stream removed, broadcasting: 1
I0128 11:16:54.715673 8 log.go:172] (0xc001d68d10) (0xc001f48780) Stream removed, broadcasting: 3
I0128 11:16:54.715683 8 log.go:172] (0xc001d68d10) (0xc001bea280) Stream removed, broadcasting: 5
Jan 28 11:16:54.715: INFO: Exec stderr: ""
Jan 28 11:16:54.715: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-crcqb PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 11:16:54.715: INFO: >>> kubeConfig: /root/.kube/config
I0128 11:16:54.792376 8 log.go:172] (0xc002150840) (0xc0015f86e0) Create stream
I0128 11:16:54.792559 8 log.go:172] (0xc002150840) (0xc0015f86e0) Stream added, broadcasting: 1
I0128 11:16:54.796931 8 log.go:172] (0xc002150840) Reply frame received for 1
I0128 11:16:54.796962 8 log.go:172] (0xc002150840) (0xc001f48820) Create stream
I0128 11:16:54.796969 8 log.go:172] (0xc002150840) (0xc001f48820) Stream added, broadcasting: 3
I0128 11:16:54.797890 8 log.go:172] (0xc002150840) Reply frame received for 3
I0128 11:16:54.797922 8 log.go:172] (0xc002150840) (0xc001bea320) Create stream
I0128 11:16:54.797944 8 log.go:172] (0xc002150840) (0xc001bea320) Stream added, broadcasting: 5
I0128 11:16:54.799073 8 log.go:172] (0xc002150840) Reply frame received for 5
I0128 11:16:54.903382 8 log.go:172] (0xc002150840) Data frame received for 3
I0128 11:16:54.903527 8 log.go:172] (0xc001f48820) (3) Data frame handling
I0128 11:16:54.903545 8 log.go:172] (0xc001f48820) (3) Data frame sent
I0128 11:16:55.013797 8 log.go:172] (0xc002150840) Data frame received for 1
I0128 11:16:55.013960 8 log.go:172] (0xc002150840) (0xc001f48820) Stream removed, broadcasting: 3
I0128 11:16:55.014034 8 log.go:172] (0xc0015f86e0) (1) Data frame handling
I0128 11:16:55.014059 8 log.go:172] (0xc0015f86e0) (1) Data frame sent
I0128 11:16:55.014103 8 log.go:172] (0xc002150840) (0xc001bea320) Stream removed, broadcasting: 5
I0128 11:16:55.014159 8 log.go:172] (0xc002150840) (0xc0015f86e0) Stream removed, broadcasting: 1
I0128 11:16:55.014182 8 log.go:172] (0xc002150840) Go away received
I0128 11:16:55.014768 8 log.go:172] (0xc002150840) (0xc0015f86e0) Stream removed, broadcasting: 1
I0128 11:16:55.014808 8 log.go:172] (0xc002150840) (0xc001f48820) Stream removed, broadcasting: 3
I0128 11:16:55.014829 8 log.go:172] (0xc002150840) (0xc001bea320) Stream removed, broadcasting: 5
Jan 28 11:16:55.014: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:16:55.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-crcqb" for this suite.
Jan 28 11:17:45.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:17:45.167: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-crcqb, resource: bindings, ignored listing per whitelist
Jan 28 11:17:45.214: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-crcqb deletion completed in 50.183862237s

• [SLOW TEST:76.513 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:17:45.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-cef08f8b-41bf-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 11:17:45.484: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-z55qb" to be "success or failure"
Jan 28 11:17:45.490: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.945833ms
Jan 28 11:17:47.770: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286379756s
Jan 28 11:17:49.800: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316339172s
Jan 28 11:17:52.032: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548309633s
Jan 28 11:17:54.199: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715223844s
Jan 28 11:17:56.221: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.736988843s
STEP: Saw pod success
Jan 28 11:17:56.221: INFO: Pod "pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:17:56.227: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 11:17:56.351: INFO: Waiting for pod pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005 to disappear
Jan 28 11:17:56.439: INFO: Pod pod-projected-configmaps-cef1aaa4-41bf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:17:56.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z55qb" for this suite.
Jan 28 11:18:02.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:18:02.597: INFO: namespace: e2e-tests-projected-z55qb, resource: bindings, ignored listing per whitelist
Jan 28 11:18:02.677: INFO: namespace e2e-tests-projected-z55qb deletion completed in 6.224157149s

• [SLOW TEST:17.463 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:18:02.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 28 11:18:03.388: INFO: Waiting up to 5m0s for pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7" in namespace "e2e-tests-svcaccounts-skthn" to be "success or failure"
Jan 28 11:18:03.470: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false.
Elapsed: 82.244861ms
Jan 28 11:18:05.714: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325283573s
Jan 28 11:18:07.736: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347984719s
Jan 28 11:18:10.127: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739138837s
Jan 28 11:18:12.703: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.314956329s
Jan 28 11:18:14.736: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348183997s
Jan 28 11:18:16.750: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.3620625s
Jan 28 11:18:18.775: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.386598028s
Jan 28 11:18:20.790: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.402105153s
STEP: Saw pod success
Jan 28 11:18:20.791: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7" satisfied condition "success or failure"
Jan 28 11:18:20.794: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7 container token-test:
STEP: delete the pod
Jan 28 11:18:20.880: INFO: Waiting for pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7 to disappear
Jan 28 11:18:20.901: INFO: Pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-hrmr7 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 28 11:18:20.919: INFO: Waiting up to 5m0s for pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68" in namespace "e2e-tests-svcaccounts-skthn" to be "success or failure"
Jan 28 11:18:21.081: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 161.433624ms
Jan 28 11:18:23.184: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264321689s
Jan 28 11:18:25.974: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053785974s
Jan 28 11:18:27.992: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.072778834s
Jan 28 11:18:30.158: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 9.237998032s
Jan 28 11:18:32.197: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 11.276881073s
Jan 28 11:18:34.296: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Pending", Reason="", readiness=false. Elapsed: 13.376036653s
Jan 28 11:18:36.305: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.385076213s
STEP: Saw pod success
Jan 28 11:18:36.305: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68" satisfied condition "success or failure"
Jan 28 11:18:36.313: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68 container root-ca-test:
STEP: delete the pod
Jan 28 11:18:36.947: INFO: Waiting for pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68 to disappear
Jan 28 11:18:37.028: INFO: Pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-krj68 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 28 11:18:37.080: INFO: Waiting up to 5m0s for pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42" in namespace "e2e-tests-svcaccounts-skthn" to be "success or failure"
Jan 28 11:18:37.211: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 130.039181ms
Jan 28 11:18:39.223: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142719895s
Jan 28 11:18:41.247: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166490528s
Jan 28 11:18:43.294: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212992705s
Jan 28 11:18:45.812: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731084086s
Jan 28 11:18:47.971: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.890399557s Jan 28 11:18:49.992: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Pending", Reason="", readiness=false. Elapsed: 12.910960483s Jan 28 11:18:52.004: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.923314876s STEP: Saw pod success Jan 28 11:18:52.004: INFO: Pod "pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42" satisfied condition "success or failure" Jan 28 11:18:52.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42 container namespace-test: STEP: delete the pod Jan 28 11:18:52.729: INFO: Waiting for pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42 to disappear Jan 28 11:18:53.118: INFO: Pod pod-service-account-d997b953-41bf-11ea-a04a-0242ac110005-zkx42 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:18:53.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-skthn" for this suite. 
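[Editor's note] The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` records above are produced by a polling loop: the framework re-reads the pod's phase every couple of seconds until it reaches a terminal phase (Succeeded or Failed) or the timeout expires. A minimal sketch of that loop, in Python for illustration only (the `get_phase` callback, the 2-second interval, and the injectable `clock`/`sleep` are assumptions for the sketch, not the framework's actual Go API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout expires.

    Mirrors the log above: each poll reports Phase and Elapsed; the terminal
    phases are "Succeeded" and "Failed" ("success or failure").
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

In the real framework the timeout (5m0s here) and interval are fixed by the test; injecting the clock and sleep just makes the loop easy to exercise without a cluster.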
Jan 28 11:19:01.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:19:01.239: INFO: namespace: e2e-tests-svcaccounts-skthn, resource: bindings, ignored listing per whitelist Jan 28 11:19:01.428: INFO: namespace e2e-tests-svcaccounts-skthn deletion completed in 8.291199882s • [SLOW TEST:58.750 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:19:01.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 28 11:19:01.646: INFO: Waiting up to 5m0s for pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-k556w" to be "success or failure" Jan 28 11:19:01.781: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 134.891615ms Jan 28 11:19:03.796: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.1504333s Jan 28 11:19:05.813: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16777292s Jan 28 11:19:08.280: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63409249s Jan 28 11:19:10.294: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.648151398s Jan 28 11:19:12.338: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.692734508s STEP: Saw pod success Jan 28 11:19:12.339: INFO: Pod "pod-fc5580cd-41bf-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:19:12.344: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fc5580cd-41bf-11ea-a04a-0242ac110005 container test-container: STEP: delete the pod Jan 28 11:19:12.869: INFO: Waiting for pod pod-fc5580cd-41bf-11ea-a04a-0242ac110005 to disappear Jan 28 11:19:13.187: INFO: Pod pod-fc5580cd-41bf-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:19:13.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k556w" for this suite. 
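[Editor's note] The `(non-root,0666,default)` case above verifies that a file created in an emptyDir volume on the default medium carries permission bits 0666. The actual test does this inside a container; as a local stand-in, the permission check itself reduces to reading the mode bits from `stat`, as in this Python sketch (the temp file is an assumption standing in for the volume mount):

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return only the permission bits of path, e.g. 0o666."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Stand-in for the file the test container writes into the emptyDir volume:
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)  # the mode the (non-root,0666,default) case expects
```

`stat.S_IMODE` masks off the file-type bits, so the comparison is against exactly the rw-rw-rw- permissions the test asserts.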
Jan 28 11:19:19.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:19:19.546: INFO: namespace: e2e-tests-emptydir-k556w, resource: bindings, ignored listing per whitelist Jan 28 11:19:19.570: INFO: namespace e2e-tests-emptydir-k556w deletion completed in 6.305237672s • [SLOW TEST:18.142 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:19:19.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 28 11:19:19.746: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 28 11:19:19.778: INFO: Waiting for terminating namespaces to be deleted... 
Jan 28 11:19:19.782: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test Jan 28 11:19:19.806: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 28 11:19:19.806: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 28 11:19:19.806: INFO: Container coredns ready: true, restart count 0 Jan 28 11:19:19.807: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 28 11:19:19.807: INFO: Container kube-proxy ready: true, restart count 0 Jan 28 11:19:19.807: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 28 11:19:19.807: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 28 11:19:19.807: INFO: Container weave ready: true, restart count 0 Jan 28 11:19:19.807: INFO: Container weave-npc ready: true, restart count 0 Jan 28 11:19:19.807: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 28 11:19:19.807: INFO: Container coredns ready: true, restart count 0 Jan 28 11:19:19.807: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 28 11:19:19.807: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0d335e99-41c0-11ea-a04a-0242ac110005 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-0d335e99-41c0-11ea-a04a-0242ac110005 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-0d335e99-41c0-11ea-a04a-0242ac110005 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:19:42.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-6nxgx" for this suite. Jan 28 11:19:54.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:19:54.517: INFO: namespace: e2e-tests-sched-pred-6nxgx, resource: bindings, ignored listing per whitelist Jan 28 11:19:54.621: INFO: namespace e2e-tests-sched-pred-6nxgx deletion completed in 12.323900702s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:35.051 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:19:54.622: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 28 11:19:54.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-8d95k" to be "success or failure" Jan 28 11:19:54.827: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 95.205987ms Jan 28 11:19:56.849: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117048116s Jan 28 11:19:58.880: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148232509s Jan 28 11:20:01.848: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.115578199s Jan 28 11:20:04.168: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.435854854s Jan 28 11:20:06.192: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.46010806s STEP: Saw pod success Jan 28 11:20:06.193: INFO: Pod "downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:20:06.210: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005 container client-container: STEP: delete the pod Jan 28 11:20:06.616: INFO: Waiting for pod downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005 to disappear Jan 28 11:20:06.736: INFO: Pod downwardapi-volume-1bf95c6c-41c0-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:20:06.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8d95k" for this suite. Jan 28 11:20:14.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:20:14.369: INFO: namespace: e2e-tests-downward-api-8d95k, resource: bindings, ignored listing per whitelist Jan 28 11:20:14.694: INFO: namespace e2e-tests-downward-api-8d95k deletion completed in 7.716326901s • [SLOW TEST:20.073 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:20:14.695: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-jlcw7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jlcw7 to expose endpoints map[] Jan 28 11:20:15.149: INFO: Get endpoints failed (33.676878ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 28 11:20:16.165: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jlcw7 exposes endpoints map[] (1.050014354s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-jlcw7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jlcw7 to expose endpoints map[pod1:[80]] Jan 28 11:20:20.332: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.147512415s elapsed, will retry) Jan 28 11:20:23.562: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jlcw7 exposes endpoints map[pod1:[80]] (7.378053627s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-jlcw7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jlcw7 to expose endpoints map[pod1:[80] pod2:[80]] Jan 28 11:20:29.075: INFO: Unexpected endpoints: found map[28c576e4-41c0-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.391291197s elapsed, will retry) Jan 28 11:20:34.729: INFO: Unexpected endpoints: found map[28c576e4-41c0-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (11.046173395s elapsed, will retry) Jan 28 11:20:36.800: INFO: successfully 
validated that service endpoint-test2 in namespace e2e-tests-services-jlcw7 exposes endpoints map[pod1:[80] pod2:[80]] (13.116248615s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-jlcw7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jlcw7 to expose endpoints map[pod2:[80]] Jan 28 11:20:37.009: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jlcw7 exposes endpoints map[pod2:[80]] (190.675982ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-jlcw7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jlcw7 to expose endpoints map[] Jan 28 11:20:37.203: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jlcw7 exposes endpoints map[] (102.408188ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:20:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-jlcw7" for this suite. 
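[Editor's note] The "Unexpected endpoints: found ..., expected ... (will retry)" records above show the test re-listing the service's endpoints until the observed pod-to-port map converges to the expected one (e.g. `map[pod1:[80] pod2:[80]]`). A simplified Python sketch of that check and retry loop (the `get_endpoints` callback and fixed attempt count are illustrative assumptions; the real test retries on a timer up to 3m0s):

```python
def endpoints_match(observed, expected):
    """Compare endpoint maps like {"pod1": [80], "pod2": [80]}, ignoring order."""
    if set(observed) != set(expected):
        return False
    return all(sorted(observed[k]) == sorted(expected[k]) for k in expected)

def wait_for_endpoints(get_endpoints, expected, attempts=5):
    """Re-list endpoints until they match `expected`, as the test above does."""
    for _ in range(attempts):
        observed = get_endpoints()
        if endpoints_match(observed, expected):
            return observed
        print(f"Unexpected endpoints: found {observed}, expected {expected} (will retry)")
    raise AssertionError(f"endpoints never converged to {expected}")
```

Note the empty map `map[]` is also a valid expectation, used above both before the first pod is created and after both pods are deleted.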
Jan 28 11:20:45.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:20:45.228: INFO: namespace: e2e-tests-services-jlcw7, resource: bindings, ignored listing per whitelist Jan 28 11:20:45.247: INFO: namespace e2e-tests-services-jlcw7 deletion completed in 6.616129079s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:30.552 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:20:45.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jan 28 11:20:45.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v9xgm' Jan 28 11:20:47.917: INFO: stderr: "" Jan 28 11:20:47.917: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jan 28 11:20:49.565: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:49.566: INFO: Found 0 / 1 Jan 28 11:20:49.927: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:49.927: INFO: Found 0 / 1 Jan 28 11:20:50.995: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:50.995: INFO: Found 0 / 1 Jan 28 11:20:51.934: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:51.935: INFO: Found 0 / 1 Jan 28 11:20:53.579: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:53.580: INFO: Found 0 / 1 Jan 28 11:20:53.967: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:53.968: INFO: Found 0 / 1 Jan 28 11:20:54.933: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:54.933: INFO: Found 0 / 1 Jan 28 11:20:55.939: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:55.939: INFO: Found 1 / 1 Jan 28 11:20:55.939: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 28 11:20:55.948: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:20:55.948: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 28 11:20:55.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm' Jan 28 11:20:56.188: INFO: stderr: "" Jan 28 11:20:56.188: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Jan 11:20:55.252 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 11:20:55.252 # Server started, Redis version 3.2.12\n1:M 28 Jan 11:20:55.252 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Jan 11:20:55.252 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 28 11:20:56.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm --tail=1' Jan 28 11:20:56.346: INFO: stderr: "" Jan 28 11:20:56.346: INFO: stdout: "1:M 28 Jan 11:20:55.252 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 28 11:20:56.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm --limit-bytes=1' Jan 28 11:20:56.608: INFO: stderr: "" Jan 28 11:20:56.608: INFO: stdout: " " STEP: exposing timestamps Jan 28 11:20:56.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm --tail=1 --timestamps' Jan 28 11:20:56.788: INFO: 
stderr: "" Jan 28 11:20:56.788: INFO: stdout: "2020-01-28T11:20:55.253278216Z 1:M 28 Jan 11:20:55.252 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 28 11:20:59.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm --since=1s' Jan 28 11:20:59.517: INFO: stderr: "" Jan 28 11:20:59.517: INFO: stdout: "" Jan 28 11:20:59.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ddck8 redis-master --namespace=e2e-tests-kubectl-v9xgm --since=24h' Jan 28 11:20:59.827: INFO: stderr: "" Jan 28 11:20:59.828: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Jan 11:20:55.252 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 11:20:55.252 # Server started, Redis version 3.2.12\n1:M 28 Jan 11:20:55.252 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 28 Jan 11:20:55.252 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jan 28 11:20:59.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v9xgm' Jan 28 11:21:00.064: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 11:21:00.064: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 28 11:21:00.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-v9xgm' Jan 28 11:21:00.284: INFO: stderr: "No resources found.\n" Jan 28 11:21:00.284: INFO: stdout: "" Jan 28 11:21:00.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-v9xgm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 11:21:00.446: INFO: stderr: "" Jan 28 11:21:00.446: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:21:00.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v9xgm" for this suite. 
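[Editor's note] The log-filtering steps above exercise `kubectl logs` flags: `--tail=1` returned only the final "ready to accept connections" line, and `--limit-bytes=1` returned a single byte. The semantics of those two flags can be sketched in Python as plain string operations (these helpers are illustrative stand-ins, not kubectl code):

```python
def tail_lines(log, n):
    """Emulate `kubectl logs --tail=N`: keep only the last N lines."""
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log, n):
    """Emulate `kubectl logs --limit-bytes=N`: keep at most the first N bytes."""
    return log.encode()[:n].decode(errors="ignore")
```

This matches what the run shows: with `--tail=1` only the last Redis log line survives, and with `--limit-bytes=1` only the first byte of the output (a space, given the Redis banner) comes back.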
Jan 28 11:21:24.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:21:24.662: INFO: namespace: e2e-tests-kubectl-v9xgm, resource: bindings, ignored listing per whitelist Jan 28 11:21:24.764: INFO: namespace e2e-tests-kubectl-v9xgm deletion completed in 24.299143991s • [SLOW TEST:39.517 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:21:24.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jan 28 11:21:25.598: INFO: created pod pod-service-account-defaultsa Jan 28 11:21:25.598: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 28 11:21:25.611: INFO: created pod pod-service-account-mountsa Jan 28 11:21:25.611: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 28 11:21:25.645: INFO: created pod pod-service-account-nomountsa Jan 28 
11:21:25.645: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 28 11:21:25.812: INFO: created pod pod-service-account-defaultsa-mountspec Jan 28 11:21:25.812: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 28 11:21:25.889: INFO: created pod pod-service-account-mountsa-mountspec Jan 28 11:21:25.889: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 28 11:21:25.986: INFO: created pod pod-service-account-nomountsa-mountspec Jan 28 11:21:25.986: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 28 11:21:26.046: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 28 11:21:26.046: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 28 11:21:26.111: INFO: created pod pod-service-account-mountsa-nomountspec Jan 28 11:21:26.112: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 28 11:21:26.274: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 28 11:21:26.274: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:21:26.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-stqjb" for this suite. 
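[Editor's note] The nine pods above enumerate the combinations behind the "opting out of API token automount" test: `automountServiceAccountToken` can be set on the ServiceAccount, on the pod spec, or on neither, and the pod-level field takes precedence over the ServiceAccount's, with `true` as the default when both are unset. That decision rule, as a Python sketch (`None` standing in for an unset field):

```python
def automount_token(sa_automount, pod_automount):
    """Effective automountServiceAccountToken for a pod.

    The pod spec field, when set, overrides the ServiceAccount field;
    if neither is set, the default is to mount the token (True).
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

Reading the pod names as `<sa setting>-<pod setting>`, the function reproduces every "service account token volume mount: true/false" line in the log, including the case that names the test: `nomountsa-mountspec` still mounts the token because the pod spec wins.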
Jan 28 11:21:55.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:21:55.153: INFO: namespace: e2e-tests-svcaccounts-stqjb, resource: bindings, ignored listing per whitelist
Jan 28 11:21:55.292: INFO: namespace e2e-tests-svcaccounts-stqjb deletion completed in 28.084601041s
• [SLOW TEST:30.527 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:21:55.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:21:55.688: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 28 11:21:55.722: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hzczl/daemonsets","resourceVersion":"19738169"},"items":null}
Jan 28 11:21:55.731: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hzczl/pods","resourceVersion":"19738169"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:21:55.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hzczl" for this suite.
Jan 28 11:22:01.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:22:02.033: INFO: namespace: e2e-tests-daemonsets-hzczl, resource: bindings, ignored listing per whitelist
Jan 28 11:22:02.245: INFO: namespace e2e-tests-daemonsets-hzczl deletion completed in 6.391298064s
S [SKIPPING] [6.951 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:21:55.688: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:22:02.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-6820a743-41c0-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 11:22:02.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-nps4l" to be "success or failure"
Jan 28 11:22:02.619: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.808521ms
Jan 28 11:22:04.798: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226392018s
Jan 28 11:22:06.823: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250892649s
Jan 28 11:22:08.840: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268496037s
Jan 28 11:22:11.459: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.886735041s
Jan 28 11:22:13.480: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.90847232s
STEP: Saw pod success
Jan 28 11:22:13.481: INFO: Pod "pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:22:13.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 11:22:13.577: INFO: Waiting for pod pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005 to disappear
Jan 28 11:22:13.590: INFO: Pod pod-projected-configmaps-682bba03-41c0-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:22:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nps4l" for this suite.
Jan 28 11:22:19.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:22:19.835: INFO: namespace: e2e-tests-projected-nps4l, resource: bindings, ignored listing per whitelist
Jan 28 11:22:19.876: INFO: namespace e2e-tests-projected-nps4l deletion completed in 6.272298363s
• [SLOW TEST:17.631 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:22:19.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-729763d4-41c0-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 11:22:20.085: INFO: Waiting up to 5m0s for pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-j4pmn" to be "success or failure"
Jan 28 11:22:20.145: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.522397ms
Jan 28 11:22:22.174: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08878103s
Jan 28 11:22:24.188: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103404625s
Jan 28 11:22:26.310: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224933995s
Jan 28 11:22:28.328: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24337474s
Jan 28 11:22:30.345: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.259954964s
STEP: Saw pod success
Jan 28 11:22:30.345: INFO: Pod "pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:22:30.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 28 11:22:30.843: INFO: Waiting for pod pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005 to disappear
Jan 28 11:22:31.117: INFO: Pod pod-secrets-7298943c-41c0-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:22:31.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j4pmn" for this suite.
Jan 28 11:22:37.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:22:37.300: INFO: namespace: e2e-tests-secrets-j4pmn, resource: bindings, ignored listing per whitelist
Jan 28 11:22:37.443: INFO: namespace e2e-tests-secrets-j4pmn deletion completed in 6.314364315s
• [SLOW TEST:17.567 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:22:37.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wmb9g
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wmb9g
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-wmb9g
Jan 28 11:22:37.838: INFO: Found 0 stateful pods, waiting for 1
Jan 28 11:22:47.869: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 28 11:22:47.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 11:22:48.622: INFO: stderr: "I0128 11:22:48.164097 1272 log.go:172] (0xc00087e2c0) (0xc00071a640) Create stream\nI0128 11:22:48.164486 1272 log.go:172] (0xc00087e2c0) (0xc00071a640) Stream added, broadcasting: 1\nI0128 11:22:48.176092 1272 log.go:172] (0xc00087e2c0) Reply frame received for 1\nI0128 11:22:48.176177 1272 log.go:172] (0xc00087e2c0) (0xc00062ed20) Create stream\nI0128 11:22:48.176197 1272 log.go:172] (0xc00087e2c0) (0xc00062ed20) Stream added, broadcasting: 3\nI0128 11:22:48.178186 1272 log.go:172] (0xc00087e2c0) Reply frame received for 3\nI0128 11:22:48.178258 1272 log.go:172] (0xc00087e2c0) (0xc00062ee60) Create stream\nI0128 11:22:48.178277 1272 log.go:172] (0xc00087e2c0) (0xc00062ee60) Stream added, broadcasting: 5\nI0128 11:22:48.181138 1272 log.go:172] (0xc00087e2c0) Reply frame received for 5\nI0128 11:22:48.408031 1272 log.go:172] (0xc00087e2c0) Data frame received for 3\nI0128 11:22:48.408102 1272 log.go:172] (0xc00062ed20) (3) Data frame handling\nI0128 11:22:48.408130 1272 log.go:172] (0xc00062ed20) (3) Data frame sent\nI0128 11:22:48.606241 1272 log.go:172] (0xc00087e2c0) (0xc00062ed20) Stream removed, broadcasting: 3\nI0128 11:22:48.606482 1272 log.go:172] (0xc00087e2c0) Data frame received for 1\nI0128 11:22:48.606540 1272 log.go:172] (0xc00071a640) (1) Data frame handling\nI0128 11:22:48.606608 1272 log.go:172] (0xc00071a640) (1) Data frame sent\nI0128 11:22:48.606796 1272 log.go:172] (0xc00087e2c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0128 11:22:48.606848 1272 log.go:172] (0xc00087e2c0) (0xc00062ee60) Stream removed, broadcasting: 5\nI0128 11:22:48.606907 1272 log.go:172] (0xc00087e2c0) Go away received\nI0128 11:22:48.607854 1272 log.go:172] (0xc00087e2c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0128 11:22:48.607869 1272 log.go:172] (0xc00087e2c0) (0xc00062ed20) Stream removed, broadcasting: 3\nI0128 11:22:48.607873 1272 log.go:172] (0xc00087e2c0) (0xc00062ee60) Stream removed, broadcasting: 5\n"
Jan 28 11:22:48.623: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 11:22:48.623: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 28 11:22:48.640: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 28 11:22:58.675: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 11:22:58.675: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 11:22:58.758: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 28 11:22:58.759: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }]
Jan 28 11:22:58.759: INFO:
Jan 28 11:22:58.759: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 28 11:23:00.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.959283987s
Jan 28 11:23:01.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.343150034s
Jan 28 11:23:02.789: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.943832945s
Jan 28 11:23:03.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.929202022s
Jan 28 11:23:04.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.889460446s
Jan 28 11:23:06.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.857706397s
Jan 28 11:23:07.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.566787489s
Jan 28 11:23:08.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 512.252808ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-wmb9g
Jan 28 11:23:09.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:23:09.830: INFO: stderr: "I0128 11:23:09.551768 1295 log.go:172] (0xc0001386e0) (0xc0006d6640) Create stream\nI0128 11:23:09.551944 1295 log.go:172] (0xc0001386e0) (0xc0006d6640) Stream added, broadcasting: 1\nI0128 11:23:09.557456 1295 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0128 11:23:09.557497 1295 log.go:172] (0xc0001386e0) (0xc00048cd20) Create stream\nI0128 11:23:09.557510 1295 log.go:172] (0xc0001386e0) (0xc00048cd20) Stream added, broadcasting: 3\nI0128 11:23:09.558485 1295 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0128 11:23:09.558506 1295 log.go:172] (0xc0001386e0) (0xc000664000) Create stream\nI0128 11:23:09.558519 1295 log.go:172] (0xc0001386e0) (0xc000664000) Stream added, broadcasting: 5\nI0128 11:23:09.559528 1295 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0128 11:23:09.661101 1295 log.go:172] (0xc0001386e0) Data frame received for 3\nI0128 11:23:09.661169 1295 log.go:172] (0xc00048cd20) (3) Data frame handling\nI0128 11:23:09.661187 1295 log.go:172] (0xc00048cd20) (3) Data frame sent\nI0128 11:23:09.812874 1295 log.go:172] (0xc0001386e0) Data frame received for 1\nI0128 11:23:09.813008 1295 log.go:172] (0xc0006d6640) (1) Data frame handling\nI0128 11:23:09.813064 1295 log.go:172] (0xc0006d6640) (1) Data frame sent\nI0128 11:23:09.814980 1295 log.go:172] (0xc0001386e0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0128 11:23:09.815606 1295 log.go:172] (0xc0001386e0) (0xc000664000) Stream removed, broadcasting: 5\nI0128 11:23:09.815682 1295 log.go:172] (0xc0001386e0) (0xc00048cd20) Stream removed, broadcasting: 3\nI0128 11:23:09.815725 1295 log.go:172] (0xc0001386e0) Go away received\nI0128 11:23:09.816153 1295 log.go:172] (0xc0001386e0) (0xc0006d6640) Stream removed, broadcasting: 1\nI0128 11:23:09.816192 1295 log.go:172] (0xc0001386e0) (0xc00048cd20) Stream removed, broadcasting: 3\nI0128 11:23:09.816210 1295 log.go:172] (0xc0001386e0) (0xc000664000) Stream removed, broadcasting: 5\n"
Jan 28 11:23:09.831: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 11:23:09.831: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 28 11:23:09.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:23:10.311: INFO: stderr: "I0128 11:23:10.033552 1316 log.go:172] (0xc000138790) (0xc00058b220) Create stream\nI0128 11:23:10.033648 1316 log.go:172] (0xc000138790) (0xc00058b220) Stream added, broadcasting: 1\nI0128 11:23:10.040663 1316 log.go:172] (0xc000138790) Reply frame received for 1\nI0128 11:23:10.040689 1316 log.go:172] (0xc000138790) (0xc0003a4000) Create stream\nI0128 11:23:10.040695 1316 log.go:172] (0xc000138790) (0xc0003a4000) Stream added, broadcasting: 3\nI0128 11:23:10.041964 1316 log.go:172] (0xc000138790) Reply frame received for 3\nI0128 11:23:10.041997 1316 log.go:172] (0xc000138790) (0xc00058b2c0) Create stream\nI0128 11:23:10.042010 1316 log.go:172] (0xc000138790) (0xc00058b2c0) Stream added, broadcasting: 5\nI0128 11:23:10.043595 1316 log.go:172] (0xc000138790) Reply frame received for 5\nI0128 11:23:10.193489 1316 log.go:172] (0xc000138790) Data frame received for 5\nI0128 11:23:10.193539 1316 log.go:172] (0xc00058b2c0) (5) Data frame handling\nI0128 11:23:10.193556 1316 log.go:172] (0xc00058b2c0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0128 11:23:10.193581 1316 log.go:172] (0xc000138790) Data frame received for 3\nI0128 11:23:10.193590 1316 log.go:172] (0xc0003a4000) (3) Data frame handling\nI0128 11:23:10.193607 1316 log.go:172] (0xc0003a4000) (3) Data frame sent\nI0128 11:23:10.298852 1316 log.go:172] (0xc000138790) (0xc00058b2c0) Stream removed, broadcasting: 5\nI0128 11:23:10.299027 1316 log.go:172] (0xc000138790) Data frame received for 1\nI0128 11:23:10.299068 1316 log.go:172] (0xc000138790) (0xc0003a4000) Stream removed, broadcasting: 3\nI0128 11:23:10.299155 1316 log.go:172] (0xc00058b220) (1) Data frame handling\nI0128 11:23:10.299207 1316 log.go:172] (0xc00058b220) (1) Data frame sent\nI0128 11:23:10.299240 1316 log.go:172] (0xc000138790) (0xc00058b220) Stream removed, broadcasting: 1\nI0128 11:23:10.299292 1316 log.go:172] (0xc000138790) Go away received\nI0128 11:23:10.300335 1316 log.go:172] (0xc000138790) (0xc00058b220) Stream removed, broadcasting: 1\nI0128 11:23:10.300619 1316 log.go:172] (0xc000138790) (0xc0003a4000) Stream removed, broadcasting: 3\nI0128 11:23:10.300644 1316 log.go:172] (0xc000138790) (0xc00058b2c0) Stream removed, broadcasting: 5\n"
Jan 28 11:23:10.312: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 11:23:10.312: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 28 11:23:10.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:23:11.031: INFO: stderr: "I0128 11:23:10.505935 1337 log.go:172] (0xc0001386e0) (0xc00074a5a0) Create stream\nI0128 11:23:10.506744 1337 log.go:172] (0xc0001386e0) (0xc00074a5a0) Stream added, broadcasting: 1\nI0128 11:23:10.526016 1337 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0128 11:23:10.526256 1337 log.go:172] (0xc0001386e0) (0xc000632be0) Create stream\nI0128 11:23:10.526305 1337 log.go:172] (0xc0001386e0) (0xc000632be0) Stream added, broadcasting: 3\nI0128 11:23:10.528784 1337 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0128 11:23:10.528858 1337 log.go:172] (0xc0001386e0) (0xc000346000) Create stream\nI0128 11:23:10.528905 1337 log.go:172] (0xc0001386e0) (0xc000346000) Stream added, broadcasting: 5\nI0128 11:23:10.530214 1337 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0128 11:23:10.899249 1337 log.go:172] (0xc0001386e0) Data frame received for 5\nI0128 11:23:10.899361 1337 log.go:172] (0xc000346000) (5) Data frame handling\nI0128 11:23:10.899385 1337 log.go:172] (0xc000346000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0128 11:23:10.899414 1337 log.go:172] (0xc0001386e0) Data frame received for 3\nI0128 11:23:10.899421 1337 log.go:172] (0xc000632be0) (3) Data frame handling\nI0128 11:23:10.899429 1337 log.go:172] (0xc000632be0) (3) Data frame sent\nI0128 11:23:11.018191 1337 log.go:172] (0xc0001386e0) Data frame received for 1\nI0128 11:23:11.018338 1337 log.go:172] (0xc0001386e0) (0xc000632be0) Stream removed, broadcasting: 3\nI0128 11:23:11.018418 1337 log.go:172] (0xc00074a5a0) (1) Data frame handling\nI0128 11:23:11.018492 1337 log.go:172] (0xc00074a5a0) (1) Data frame sent\nI0128 11:23:11.018500 1337 log.go:172] (0xc0001386e0) (0xc00074a5a0) Stream removed, broadcasting: 1\nI0128 11:23:11.019119 1337 log.go:172] (0xc0001386e0) (0xc000346000) Stream removed, broadcasting: 5\nI0128 11:23:11.019232 1337 log.go:172] (0xc0001386e0) Go away received\nI0128 11:23:11.019290 1337 log.go:172] (0xc0001386e0) (0xc00074a5a0) Stream removed, broadcasting: 1\nI0128 11:23:11.019305 1337 log.go:172] (0xc0001386e0) (0xc000632be0) Stream removed, broadcasting: 3\nI0128 11:23:11.019313 1337 log.go:172] (0xc0001386e0) (0xc000346000) Stream removed, broadcasting: 5\n"
Jan 28 11:23:11.031: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 11:23:11.031: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 28 11:23:11.042: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:23:11.042: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Jan 28 11:23:21.082: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:23:21.082: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 11:23:21.082: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 28 11:23:21.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 11:23:21.684: INFO: stderr: "I0128 11:23:21.410455 1359 log.go:172] (0xc000138790) (0xc0005dd540) Create stream\nI0128 11:23:21.410888 1359 log.go:172] (0xc000138790) (0xc0005dd540) Stream added, broadcasting: 1\nI0128 11:23:21.418242 1359 log.go:172] (0xc000138790) Reply frame received for 1\nI0128 11:23:21.418307 1359 log.go:172] (0xc000138790) (0xc000748000) Create stream\nI0128 11:23:21.418326 1359 log.go:172] (0xc000138790) (0xc000748000) Stream added, broadcasting: 3\nI0128 11:23:21.420544 1359 log.go:172] (0xc000138790) Reply frame received for 3\nI0128 11:23:21.420714 1359 log.go:172] (0xc000138790) (0xc0005dd5e0) Create stream\nI0128 11:23:21.420755 1359 log.go:172] (0xc000138790) (0xc0005dd5e0) Stream added, broadcasting: 5\nI0128 11:23:21.422381 1359 log.go:172] (0xc000138790) Reply frame received for 5\nI0128 11:23:21.546984 1359 log.go:172] (0xc000138790) Data frame received for 3\nI0128 11:23:21.547185 1359 log.go:172] (0xc000748000) (3) Data frame handling\nI0128 11:23:21.547262 1359 log.go:172] (0xc000748000) (3) Data frame sent\nI0128 11:23:21.669160 1359 log.go:172] (0xc000138790) Data frame received for 1\nI0128 11:23:21.669742 1359 log.go:172] (0xc000138790) (0xc0005dd5e0) Stream removed, broadcasting: 5\nI0128 11:23:21.669816 1359 log.go:172] (0xc0005dd540) (1) Data frame handling\nI0128 11:23:21.669832 1359 log.go:172] (0xc0005dd540) (1) Data frame sent\nI0128 11:23:21.669869 1359 log.go:172] (0xc000138790) (0xc000748000) Stream removed, broadcasting: 3\nI0128 11:23:21.669949 1359 log.go:172] (0xc000138790) (0xc0005dd540) Stream removed, broadcasting: 1\nI0128 11:23:21.670055 1359 log.go:172] (0xc000138790) Go away received\nI0128 11:23:21.671612 1359 log.go:172] (0xc000138790) (0xc0005dd540) Stream removed, broadcasting: 1\nI0128 11:23:21.671770 1359 log.go:172] (0xc000138790) (0xc000748000) Stream removed, broadcasting: 3\nI0128 11:23:21.671832 1359 log.go:172] (0xc000138790) (0xc0005dd5e0) Stream removed, broadcasting: 5\n"
Jan 28 11:23:21.684: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 11:23:21.685: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 28 11:23:21.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 11:23:22.419: INFO: stderr: "I0128 11:23:21.873766 1381 log.go:172] (0xc0006c60b0) (0xc00058f540) Create stream\nI0128 11:23:21.874076 1381 log.go:172] (0xc0006c60b0) (0xc00058f540) Stream added, broadcasting: 1\nI0128 11:23:21.881511 1381 log.go:172] (0xc0006c60b0) Reply frame received for 1\nI0128 11:23:21.881559 1381 log.go:172] (0xc0006c60b0) (0xc00058f5e0) Create stream\nI0128 11:23:21.881569 1381 log.go:172] (0xc0006c60b0) (0xc00058f5e0) Stream added, broadcasting: 3\nI0128 11:23:21.882451 1381 log.go:172] (0xc0006c60b0) Reply frame received for 3\nI0128 11:23:21.882467 1381 log.go:172] (0xc0006c60b0) (0xc00058f680) Create stream\nI0128 11:23:21.882472 1381 log.go:172] (0xc0006c60b0) (0xc00058f680) Stream added, broadcasting: 5\nI0128 11:23:21.883208 1381 log.go:172] (0xc0006c60b0) Reply frame received for 5\nI0128 11:23:22.016464 1381 log.go:172] (0xc0006c60b0) Data frame received for 3\nI0128 11:23:22.016561 1381 log.go:172] (0xc00058f5e0) (3) Data frame handling\nI0128 11:23:22.016589 1381 log.go:172] (0xc00058f5e0) (3) Data frame sent\nI0128 11:23:22.404103 1381 log.go:172] (0xc0006c60b0) (0xc00058f5e0) Stream removed, broadcasting: 3\nI0128 11:23:22.404897 1381 log.go:172] (0xc0006c60b0) Data frame received for 1\nI0128 11:23:22.404921 1381 log.go:172] (0xc00058f540) (1) Data frame handling\nI0128 11:23:22.404938 1381 log.go:172] (0xc00058f540) (1) Data frame sent\nI0128 11:23:22.404965 1381 log.go:172] (0xc0006c60b0) (0xc00058f680) Stream removed, broadcasting: 5\nI0128 11:23:22.405016 1381 log.go:172] (0xc0006c60b0) (0xc00058f540) Stream removed, broadcasting: 1\nI0128 11:23:22.405043 1381 log.go:172] (0xc0006c60b0) Go away received\nI0128 11:23:22.406283 1381 log.go:172] (0xc0006c60b0) (0xc00058f540) Stream removed, broadcasting: 1\nI0128 11:23:22.406409 1381 log.go:172] (0xc0006c60b0) (0xc00058f5e0) Stream removed, broadcasting: 3\nI0128 11:23:22.406420 1381 log.go:172] (0xc0006c60b0) (0xc00058f680) Stream removed, broadcasting: 5\n"
Jan 28 11:23:22.420: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 11:23:22.420: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 28 11:23:22.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 11:23:23.004: INFO: stderr: "I0128 11:23:22.695440 1403 log.go:172] (0xc0005ee2c0) (0xc0006fc640) Create stream\nI0128 11:23:22.695654 1403 log.go:172] (0xc0005ee2c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0128 11:23:22.702341 1403 log.go:172] (0xc0005ee2c0) Reply frame received for 1\nI0128 11:23:22.702412 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2d20) Create stream\nI0128 11:23:22.702432 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2d20) Stream added, broadcasting: 3\nI0128 11:23:22.705448 1403 log.go:172] (0xc0005ee2c0) Reply frame received for 3\nI0128 11:23:22.705492 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2e60) Create stream\nI0128 11:23:22.705518 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2e60) Stream added, broadcasting: 5\nI0128 11:23:22.706923 1403 log.go:172] (0xc0005ee2c0) Reply frame received for 5\nI0128 11:23:22.870567 1403 log.go:172] (0xc0005ee2c0) Data frame received for 3\nI0128 11:23:22.870670 1403 log.go:172] (0xc0001a2d20) (3) Data frame handling\nI0128 11:23:22.870717 1403 log.go:172] (0xc0001a2d20) (3) Data frame sent\nI0128 11:23:22.987945 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2d20) Stream removed, broadcasting: 3\nI0128 11:23:22.988176 1403 log.go:172] (0xc0005ee2c0) Data frame received for 1\nI0128 11:23:22.988315 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2e60) Stream removed, broadcasting: 5\nI0128 11:23:22.988360 1403 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0128 11:23:22.988387 1403 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0128 11:23:22.988405 1403 log.go:172] (0xc0005ee2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0128 11:23:22.988421 1403 log.go:172] (0xc0005ee2c0) Go away received\nI0128 11:23:22.989554 1403 log.go:172] (0xc0005ee2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0128 11:23:22.989953 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2d20) Stream removed, broadcasting: 3\nI0128 11:23:22.989965 1403 log.go:172] (0xc0005ee2c0) (0xc0001a2e60) Stream removed, broadcasting: 5\n"
Jan 28 11:23:23.004: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 11:23:23.004: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 28 11:23:23.004: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 11:23:23.021: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 28 11:23:33.047: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 11:23:33.047: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 11:23:33.047: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 11:23:33.140: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 28 11:23:33.140: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }]
Jan 28 11:23:33.140: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }]
Jan 28 11:23:33.140: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }]
Jan 28 11:23:33.140: INFO:
Jan 28 11:23:33.140: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 11:23:34.170: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 28 11:23:34.170: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:34.170: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:34.170: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:34.170: INFO: Jan 28 11:23:34.170: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:35.246: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:35.246: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:35.247: INFO: ss-1 hunter-server-hu5at5svl7ps Running 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:35.247: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:35.247: INFO: Jan 28 11:23:35.247: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:36.505: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:36.505: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:36.506: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:36.506: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:36.506: INFO: Jan 28 11:23:36.506: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:37.528: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:37.528: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:37.529: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:37.529: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:37.529: INFO: Jan 28 11:23:37.529: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:38.727: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:38.727: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:38.727: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:38.727: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:38.728: INFO: Jan 28 11:23:38.728: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:39.873: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:39.873: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:39.874: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:39.874: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:39.874: INFO: Jan 28 11:23:39.874: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:40.976: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:40.977: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:40.977: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:40.977: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:40.977: INFO: Jan 28 11:23:40.977: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:41.989: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:41.990: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:41.990: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:41.990: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:41.990: INFO: Jan 28 11:23:41.990: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 28 11:23:43.003: INFO: POD NODE PHASE GRACE CONDITIONS Jan 28 11:23:43.003: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:38 +0000 UTC }] Jan 28 11:23:43.003: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:22:58 +0000 UTC }] Jan 28 11:23:43.003: INFO: Jan 28 11:23:43.003: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-wmb9g Jan 28 11:23:44.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:23:44.253: INFO: rc: 1 Jan 28 11:23:44.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0021b8780 exit status 1 true [0xc0011ce040 0xc0011ce058 0xc0011ce070] [0xc0011ce040 0xc0011ce058 0xc0011ce070] [0xc0011ce050 0xc0011ce068] [0x935700 0x935700] 0xc00253c780 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 28 11:23:54.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:23:54.469: INFO: rc: 1 Jan 28 11:23:54.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0025578c0 exit status 1 true [0xc00121ebf8
0xc00121ec10 0xc00121ec28] [0xc00121ebf8 0xc00121ec10 0xc00121ec28] [0xc00121ec08 0xc00121ec20] [0x935700 0x935700] 0xc0021ba180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:04.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:04.747: INFO: rc: 1 Jan 28 11:24:04.748: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc120 exit status 1 true [0xc00000e208 0xc00000ebd0 0xc00000ece8] [0xc00000e208 0xc00000ebd0 0xc00000ece8] [0xc00000e2b8 0xc00000eca0] [0x935700 0x935700] 0xc001e661e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:14.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:14.967: INFO: rc: 1 Jan 28 11:24:14.967: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002218120 exit status 1 true [0xc00016e000 0xc000e501e8 0xc000e50270] [0xc00016e000 0xc000e501e8 0xc000e50270] [0xc000e500e8 0xc000e50230] [0x935700 0x935700] 0xc001654240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:24.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:25.131: INFO: rc: 1 Jan 28 11:24:25.131: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a120 exit status 1 true [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8010 0xc000ae8028] [0x935700 0x935700] 0xc001c0e780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:35.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:35.327: INFO: rc: 1 Jan 28 11:24:35.328: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b4a150 exit status 1 true [0xc0016f8000 0xc0016f8018 0xc0016f8030] [0xc0016f8000 0xc0016f8018 0xc0016f8030] [0xc0016f8010 0xc0016f8028] [0x935700 0x935700] 0xc001b08540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:45.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:45.458: INFO: rc: 1 Jan 28 11:24:45.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a240 exit status 1 true [0xc000ae8038 0xc000ae8058 0xc000ae8070] [0xc000ae8038 0xc000ae8058 0xc000ae8070] [0xc000ae8050 0xc000ae8068] [0x935700 0x935700] 0xc001c0ef60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:24:55.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:24:55.613: INFO: rc: 1 Jan 28 11:24:55.614: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022182d0 exit status 1 true [0xc000e502b8 0xc000e503c0 0xc000e50458] [0xc000e502b8 0xc000e503c0 0xc000e50458] [0xc000e50318 0xc000e50438] [0x935700 0x935700] 0xc0016544e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:05.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:25:05.816: INFO: rc: 1 Jan 28 11:25:05.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc240 exit status 1 true [0xc00000ed10 0xc00000ed80 0xc00000ee20] [0xc00000ed10 0xc00000ed80 0xc00000ee20] [0xc00000ed58 0xc00000edc8] 
[0x935700 0x935700] 0xc001e67920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:15.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:25:15.999: INFO: rc: 1 Jan 28 11:25:15.999: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a360 exit status 1 true [0xc000ae8078 0xc000ae8090 0xc000ae80a8] [0xc000ae8078 0xc000ae8090 0xc000ae80a8] [0xc000ae8088 0xc000ae80a0] [0x935700 0x935700] 0xc001c0fb60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:26.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:25:26.128: INFO: rc: 1 Jan 28 11:25:26.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a480 exit status 1 true [0xc000ae80b0 0xc000ae80c8 0xc000ae80e0] [0xc000ae80b0 0xc000ae80c8 0xc000ae80e0] [0xc000ae80c0 0xc000ae80d8] [0x935700 0x935700] 0xc001cb0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:36.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Jan 28 11:25:36.239: INFO: rc: 1 Jan 28 11:25:36.239: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a5a0 exit status 1 true [0xc000ae80e8 0xc000ae8100 0xc000ae8118] [0xc000ae80e8 0xc000ae8100 0xc000ae8118] [0xc000ae80f8 0xc000ae8110] [0x935700 0x935700] 0xc001cb1b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:46.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:25:46.459: INFO: rc: 1 Jan 28 11:25:46.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b4a300 exit status 1 true [0xc0016f8038 0xc0016f8050 0xc0016f8078] [0xc0016f8038 0xc0016f8050 0xc0016f8078] [0xc0016f8048 0xc0016f8070] [0x935700 0x935700] 0xc001b08840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:25:56.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:25:56.672: INFO: rc: 1 Jan 28 11:25:56.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc3f0 exit status 1 true [0xc00000ee38 0xc00000efb0 0xc00000f148] [0xc00000ee38 0xc00000efb0 0xc00000f148] [0xc00000eed8 0xc00000eff8] [0x935700 0x935700] 0xc001e67bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:26:06.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:26:06.931: INFO: rc: 1 Jan 28 11:26:06.931: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014680f0 exit status 1 true [0xc00121e008 0xc00121e020 0xc00121e038] [0xc00121e008 0xc00121e020 0xc00121e038] [0xc00121e018 0xc00121e030] [0x935700 0x935700] 0xc001cac2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:26:16.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:26:17.126: INFO: rc: 1 Jan 28 11:26:17.126: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001468270 exit status 1 true [0xc00016e000 0xc00000e278 0xc00000ebf0] [0xc00016e000 0xc00000e278 0xc00000ebf0] [0xc00000e208 0xc00000ebd0] [0x935700 0x935700] 0xc001c0e780 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:26:27.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:26:27.401: INFO: rc: 1 Jan 28 11:26:27.401: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc180 exit status 1 true [0xc00121e040 0xc00121e058 0xc00121e070] [0xc00121e040 0xc00121e058 0xc00121e070] [0xc00121e050 0xc00121e068] [0x935700 0x935700] 0xc001e661e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:26:37.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:26:37.559: INFO: rc: 1 Jan 28 11:26:37.560: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a150 exit status 1 true [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8010 0xc000ae8028] [0x935700 0x935700] 0xc001cac960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 28 11:26:47.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 28 11:26:47.795: INFO: rc: 1 Jan 28 11:26:47.795: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a2d0 exit status 1 true [0xc000ae8038 0xc000ae8058 0xc000ae8070] [0xc000ae8038 0xc000ae8058 0xc000ae8070] [0xc000ae8050 0xc000ae8068] [0x935700 0x935700] 0xc001cad200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:26:57.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:26:57.994: INFO: rc: 1
Jan 28 11:26:57.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc2d0 exit status 1 true [0xc00121e078 0xc00121e090 0xc00121e0a8] [0xc00121e078 0xc00121e090 0xc00121e0a8] [0xc00121e088 0xc00121e0a0] [0x935700 0x935700] 0xc001e67920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:07.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:08.126: INFO: rc: 1
Jan 28 11:27:08.126: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b4a180 exit status 1 true [0xc0016f8000 0xc0016f8018 0xc0016f8030] [0xc0016f8000 0xc0016f8018 0xc0016f8030] [0xc0016f8010 0xc0016f8028] [0x935700 0x935700] 0xc001cb02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:18.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:18.313: INFO: rc: 1
Jan 28 11:27:18.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc450 exit status 1 true [0xc00121e0b0 0xc00121e0c8 0xc00121e0e0] [0xc00121e0b0 0xc00121e0c8 0xc00121e0e0] [0xc00121e0c0 0xc00121e0d8] [0x935700 0x935700] 0xc001e67bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:28.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:28.516: INFO: rc: 1
Jan 28 11:27:28.517: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c2a4b0 exit status 1 true [0xc000ae8078 0xc000ae8090 0xc000ae80a8] [0xc000ae8078 0xc000ae8090 0xc000ae80a8] [0xc000ae8088 0xc000ae80a0] [0x935700 0x935700] 0xc001cad5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:38.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:38.705: INFO: rc: 1
Jan 28 11:27:38.705: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc5a0 exit status 1 true [0xc00121e0e8 0xc00121e100 0xc00121e118] [0xc00121e0e8 0xc00121e100 0xc00121e118] [0xc00121e0f8 0xc00121e110] [0x935700 0x935700] 0xc001e67e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:48.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:48.899: INFO: rc: 1
Jan 28 11:27:48.899: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b4a2d0 exit status 1 true [0xc0016f8038 0xc0016f8050 0xc0016f8078] [0xc0016f8038 0xc0016f8050 0xc0016f8078] [0xc0016f8048 0xc0016f8070] [0x935700 0x935700] 0xc001cb1b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:27:58.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:27:59.093: INFO: rc: 1
Jan 28 11:27:59.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014683c0 exit status 1 true [0xc00000eca0 0xc00000ed40 0xc00000ed98] [0xc00000eca0 0xc00000ed40 0xc00000ed98] [0xc00000ed10 0xc00000ed80] [0x935700 0x935700] 0xc001c0ef60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:28:09.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:28:09.255: INFO: rc: 1
Jan 28 11:28:09.256: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc120 exit status 1 true [0xc00000e1b8 0xc00000e2b8 0xc00000eca0] [0xc00000e1b8 0xc00000e2b8 0xc00000eca0] [0xc00000e278 0xc00000ebf0] [0x935700 0x935700] 0xc001e661e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:28:19.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:28:19.433: INFO: rc: 1
Jan 28 11:28:19.435: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001468180 exit status 1 true [0xc00121e000 0xc00121e018 0xc00121e030] [0xc00121e000 0xc00121e018 0xc00121e030] [0xc00121e010 0xc00121e028] [0x935700 0x935700] 0xc001c0e780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:28:29.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:28:29.564: INFO: rc: 1
Jan 28 11:28:29.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ecc2a0 exit status 1 true [0xc00000ece8 0xc00000ed58 0xc00000edc8] [0xc00000ece8 0xc00000ed58 0xc00000edc8] [0xc00000ed40 0xc00000ed98] [0x935700 0x935700] 0xc001e67920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:28:39.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:28:39.737: INFO: rc: 1
Jan 28 11:28:39.738: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b4a150 exit status 1 true [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8000 0xc000ae8018 0xc000ae8030] [0xc000ae8010 0xc000ae8028] [0x935700 0x935700] 0xc001cac2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 28 11:28:49.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wmb9g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 11:28:49.959: INFO: rc: 1
Jan 28 11:28:49.960: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Jan 28 11:28:49.960: INFO: Scaling statefulset ss to 0
Jan 28 11:28:49.983: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 28 11:28:49.985: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wmb9g
Jan 28 11:28:49.988: INFO: Scaling statefulset ss to 0
Jan 28 11:28:49.996: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 11:28:49.998: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:28:50.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wmb9g" for this suite.
Jan 28 11:28:58.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:28:58.136: INFO: namespace: e2e-tests-statefulset-wmb9g, resource: bindings, ignored listing per whitelist
Jan 28 11:28:58.193: INFO: namespace e2e-tests-statefulset-wmb9g deletion completed in 8.168092108s

• [SLOW TEST:380.750 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:28:58.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 11:28:58.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-600b595d-41bb-41c1-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-cq9zk" to be "success or failure"
Jan 28 11:28:58.548: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 126.013346ms
Jan 28 11:29:00.578: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156288578s
Jan 28 11:29:02.607: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185377491s
Jan 28 11:29:04.696: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27432617s
Jan 28 11:29:06.898: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476499961s
Jan 28 11:29:08.911: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.489445369s
STEP: Saw pod success
Jan 28 11:29:08.911: INFO: Pod "downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:29:08.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005 container client-container:
STEP: delete the pod
Jan 28 11:29:09.600: INFO: Waiting for pod downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005 to disappear
Jan 28 11:29:09.679: INFO: Pod downwardapi-volume-600b595d-41c1-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:29:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cq9zk" for this suite.
Jan 28 11:29:15.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:29:15.851: INFO: namespace: e2e-tests-downward-api-cq9zk, resource: bindings, ignored listing per whitelist
Jan 28 11:29:15.917: INFO: namespace e2e-tests-downward-api-cq9zk deletion completed in 6.21903728s

• [SLOW TEST:17.723 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:29:15.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-9bkc
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 11:29:16.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9bkc" in namespace "e2e-tests-subpath-zh9f5" to be "success or failure"
Jan 28 11:29:16.430: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 106.468202ms
Jan 28 11:29:18.457: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1333459s
Jan 28 11:29:20.485: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161110113s
Jan 28 11:29:22.624: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300555273s
Jan 28 11:29:24.639: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315176741s
Jan 28 11:29:26.720: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.395961456s
Jan 28 11:29:28.733: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.408915933s
Jan 28 11:29:30.819: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.495041292s
Jan 28 11:29:32.833: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 16.508744613s
Jan 28 11:29:34.859: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 18.535471759s
Jan 28 11:29:36.880: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 20.556250189s
Jan 28 11:29:38.905: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 22.58136547s
Jan 28 11:29:40.928: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 24.60451768s
Jan 28 11:29:42.947: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 26.623610264s
Jan 28 11:29:44.968: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 28.643910681s
Jan 28 11:29:46.987: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 30.662740227s
Jan 28 11:29:48.999: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Running", Reason="", readiness=false. Elapsed: 32.675136964s
Jan 28 11:29:51.039: INFO: Pod "pod-subpath-test-projected-9bkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.715017745s
STEP: Saw pod success
Jan 28 11:29:51.039: INFO: Pod "pod-subpath-test-projected-9bkc" satisfied condition "success or failure"
Jan 28 11:29:51.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-9bkc container test-container-subpath-projected-9bkc:
STEP: delete the pod
Jan 28 11:29:51.240: INFO: Waiting for pod pod-subpath-test-projected-9bkc to disappear
Jan 28 11:29:51.407: INFO: Pod pod-subpath-test-projected-9bkc no longer exists
STEP: Deleting pod pod-subpath-test-projected-9bkc
Jan 28 11:29:51.408: INFO: Deleting pod "pod-subpath-test-projected-9bkc" in namespace "e2e-tests-subpath-zh9f5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:29:51.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zh9f5" for this suite.
Jan 28 11:29:59.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:29:59.526: INFO: namespace: e2e-tests-subpath-zh9f5, resource: bindings, ignored listing per whitelist
Jan 28 11:29:59.657: INFO: namespace e2e-tests-subpath-zh9f5 deletion completed in 8.216219525s

• [SLOW TEST:43.740 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:29:59.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 28 11:29:59.961: INFO: Waiting up to 5m0s for pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-rw8lb" to be "success or failure"
Jan 28 11:29:59.972: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.28608ms
Jan 28 11:30:02.261: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299341108s
Jan 28 11:30:04.276: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314626715s
Jan 28 11:30:06.324: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362885727s
Jan 28 11:30:08.636: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.67448345s
Jan 28 11:30:10.879: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.916898566s
STEP: Saw pod success
Jan 28 11:30:10.879: INFO: Pod "downward-api-84b270a0-41c1-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:30:10.889: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-84b270a0-41c1-11ea-a04a-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 28 11:30:11.046: INFO: Waiting for pod downward-api-84b270a0-41c1-11ea-a04a-0242ac110005 to disappear
Jan 28 11:30:11.060: INFO: Pod downward-api-84b270a0-41c1-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:30:11.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rw8lb" for this suite.
Jan 28 11:30:17.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:30:17.156: INFO: namespace: e2e-tests-downward-api-rw8lb, resource: bindings, ignored listing per whitelist
Jan 28 11:30:17.304: INFO: namespace e2e-tests-downward-api-rw8lb deletion completed in 6.233116385s

• [SLOW TEST:17.647 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:30:17.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0128 11:30:48.234109       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 11:30:48.234: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:30:48.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jvc5h" for this suite.
Jan 28 11:30:58.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:30:59.028: INFO: namespace: e2e-tests-gc-jvc5h, resource: bindings, ignored listing per whitelist
Jan 28 11:30:59.043: INFO: namespace e2e-tests-gc-jvc5h deletion completed in 10.769193357s

• [SLOW TEST:41.739 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:30:59.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 11:31:00.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-kzbll" to be "success or failure"
Jan 28 11:31:00.873: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 179.070026ms
Jan 28 11:31:03.296: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60271244s
Jan 28 11:31:05.307: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613216522s
Jan 28 11:31:07.593: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.899691357s
Jan 28 11:31:09.615: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921485047s
Jan 28 11:31:11.633: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.939839687s
STEP: Saw pod success
Jan 28 11:31:11.634: INFO: Pod "downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:31:11.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005 container client-container:
STEP: delete the pod
Jan 28 11:31:12.140: INFO: Waiting for pod downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005 to disappear
Jan 28 11:31:12.173: INFO: Pod downwardapi-volume-a8d0f751-41c1-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:31:12.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kzbll" for this suite.
Jan 28 11:31:18.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:31:18.700: INFO: namespace: e2e-tests-downward-api-kzbll, resource: bindings, ignored listing per whitelist
Jan 28 11:31:18.725: INFO: namespace e2e-tests-downward-api-kzbll deletion completed in 6.535976507s

• [SLOW TEST:19.682 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:31:18.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b3d4228b-41c1-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 11:31:18.989: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-jf4x4" to be "success or failure"
Jan 28 11:31:18.997: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08223ms
Jan 28 11:31:21.287: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29729932s
Jan 28 11:31:23.310: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320234721s
Jan 28 11:31:25.661: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672009552s
Jan 28 11:31:27.999: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009237576s
Jan 28 11:31:30.026: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.037002449s
STEP: Saw pod success
Jan 28 11:31:30.027: INFO: Pod "pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:31:30.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 11:31:30.199: INFO: Waiting for pod pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005 to disappear
Jan 28 11:31:30.232: INFO: Pod pod-projected-configmaps-b3d5196f-41c1-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:31:30.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jf4x4" for this suite.
Jan 28 11:31:36.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:31:36.738: INFO: namespace: e2e-tests-projected-jf4x4, resource: bindings, ignored listing per whitelist Jan 28 11:31:36.754: INFO: namespace e2e-tests-projected-jf4x4 deletion completed in 6.50610005s • [SLOW TEST:18.028 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:31:36.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jan 28 11:31:36.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 28 11:31:37.198: INFO: stderr: "" Jan 28 11:31:37.198: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:31:37.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v9cq5" for this suite. Jan 28 11:31:43.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:31:43.340: INFO: namespace: e2e-tests-kubectl-v9cq5, resource: bindings, ignored listing per whitelist Jan 28 11:31:43.438: INFO: namespace e2e-tests-kubectl-v9cq5 deletion completed in 6.227693277s • [SLOW TEST:6.684 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:31:43.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 28 11:31:43.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-dq5dc" to be "success or failure" Jan 28 11:31:43.744: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.214048ms Jan 28 11:31:45.950: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22090832s Jan 28 11:31:47.973: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244320445s Jan 28 11:31:50.087: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357864749s Jan 28 11:31:52.378: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.649154108s Jan 28 11:31:54.391: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.661871164s STEP: Saw pod success Jan 28 11:31:54.391: INFO: Pod "downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:31:54.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005 container client-container: STEP: delete the pod Jan 28 11:31:54.500: INFO: Waiting for pod downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005 to disappear Jan 28 11:31:54.580: INFO: Pod downwardapi-volume-c286a784-41c1-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:31:54.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dq5dc" for this suite. Jan 28 11:32:00.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:32:00.744: INFO: namespace: e2e-tests-downward-api-dq5dc, resource: bindings, ignored listing per whitelist Jan 28 11:32:00.782: INFO: namespace e2e-tests-downward-api-dq5dc deletion completed in 6.19455127s • [SLOW TEST:17.344 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:32:00.784: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-cce04509-41c1-11ea-a04a-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-cce04509-41c1-11ea-a04a-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:33:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vjwrm" for this suite. Jan 28 11:34:01.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:34:01.700: INFO: namespace: e2e-tests-configmap-vjwrm, resource: bindings, ignored listing per whitelist Jan 28 11:34:01.756: INFO: namespace e2e-tests-configmap-vjwrm deletion completed in 24.280936953s • [SLOW TEST:120.973 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
Jan 28 11:34:01.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 28 11:34:02.066: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 28 11:34:02.376: INFO: Number of nodes with available pods: 0 Jan 28 11:34:02.376: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:05.116: INFO: Number of nodes with available pods: 0 Jan 28 11:34:05.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:05.403: INFO: Number of nodes with available pods: 0 Jan 28 11:34:05.403: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:06.413: INFO: Number of nodes with available pods: 0 Jan 28 11:34:06.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:07.434: INFO: Number of nodes with available pods: 0 Jan 28 11:34:07.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:09.704: INFO: Number of nodes with available pods: 0 Jan 28 11:34:09.705: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:10.700: INFO: Number of nodes with available pods: 0 Jan 28 11:34:10.700: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:11.401: INFO: Number of nodes with available pods: 0 Jan 28 11:34:11.402: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:12.402: INFO: Number of nodes with 
available pods: 1 Jan 28 11:34:12.402: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 28 11:34:12.589: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:13.647: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:14.664: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:15.813: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:16.639: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:17.651: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:18.659: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:19.665: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:19.665: INFO: Pod daemon-set-lbwks is not available Jan 28 11:34:20.643: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 28 11:34:20.644: INFO: Pod daemon-set-lbwks is not available Jan 28 11:34:21.640: INFO: Wrong image for pod: daemon-set-lbwks. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 28 11:34:21.641: INFO: Pod daemon-set-lbwks is not available Jan 28 11:34:22.780: INFO: Pod daemon-set-5dkxz is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 28 11:34:22.934: INFO: Number of nodes with available pods: 0 Jan 28 11:34:22.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:23.997: INFO: Number of nodes with available pods: 0 Jan 28 11:34:23.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:25.018: INFO: Number of nodes with available pods: 0 Jan 28 11:34:25.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:25.957: INFO: Number of nodes with available pods: 0 Jan 28 11:34:25.957: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:26.964: INFO: Number of nodes with available pods: 0 Jan 28 11:34:26.964: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:28.048: INFO: Number of nodes with available pods: 0 Jan 28 11:34:28.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:28.993: INFO: Number of nodes with available pods: 0 Jan 28 11:34:28.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:30.186: INFO: Number of nodes with available pods: 0 Jan 28 11:34:30.187: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:30.972: INFO: Number of nodes with available pods: 0 Jan 28 11:34:30.972: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:32.047: INFO: Number of nodes with available pods: 0 Jan 28 11:34:32.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 28 11:34:32.988: INFO: Number of nodes with available pods: 1 Jan 28 11:34:32.988: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mqzbc, will wait for the garbage collector to delete the pods Jan 28 11:34:33.113: INFO: Deleting DaemonSet.extensions daemon-set took: 21.04487ms Jan 28 11:34:33.213: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.272881ms Jan 28 11:34:42.739: INFO: Number of nodes with available pods: 0 Jan 28 11:34:42.739: INFO: Number of running nodes: 0, number of available pods: 0 Jan 28 11:34:42.745: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mqzbc/daemonsets","resourceVersion":"19739603"},"items":null} Jan 28 11:34:42.750: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mqzbc/pods","resourceVersion":"19739603"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:34:42.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mqzbc" for this suite. 
Jan 28 11:34:48.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:34:49.141: INFO: namespace: e2e-tests-daemonsets-mqzbc, resource: bindings, ignored listing per whitelist Jan 28 11:34:49.171: INFO: namespace e2e-tests-daemonsets-mqzbc deletion completed in 6.360742638s • [SLOW TEST:47.414 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:34:49.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gj5w9 Jan 28 11:34:59.509: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gj5w9 STEP: checking the pod's current state and verifying that restartCount is present Jan 28 11:34:59.579: INFO: Initial restart count of pod liveness-http is 0 
STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:39:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gj5w9" for this suite. Jan 28 11:39:07.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:39:07.375: INFO: namespace: e2e-tests-container-probe-gj5w9, resource: bindings, ignored listing per whitelist Jan 28 11:39:07.559: INFO: namespace e2e-tests-container-probe-gj5w9 deletion completed in 6.473695566s • [SLOW TEST:258.388 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:39:07.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 28 11:39:07.869: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q25wr' Jan 28 11:39:10.427: INFO: stderr: "" Jan 28 11:39:10.427: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 28 11:39:11.459: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:11.460: INFO: Found 0 / 1 Jan 28 11:39:13.786: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:13.786: INFO: Found 0 / 1 Jan 28 11:39:14.640: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:14.641: INFO: Found 0 / 1 Jan 28 11:39:15.460: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:15.461: INFO: Found 0 / 1 Jan 28 11:39:16.452: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:16.452: INFO: Found 0 / 1 Jan 28 11:39:17.443: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:17.443: INFO: Found 0 / 1 Jan 28 11:39:18.462: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:18.463: INFO: Found 0 / 1 Jan 28 11:39:20.198: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:20.198: INFO: Found 0 / 1 Jan 28 11:39:20.595: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:20.596: INFO: Found 0 / 1 Jan 28 11:39:21.481: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:21.481: INFO: Found 0 / 1 Jan 28 11:39:22.457: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:22.457: INFO: Found 0 / 1 Jan 28 11:39:23.445: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:23.446: INFO: Found 0 / 1 Jan 28 11:39:24.441: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:24.441: INFO: Found 1 / 1 Jan 28 11:39:24.441: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 28 11:39:24.446: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:24.446: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 28 11:39:24.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-smz5m --namespace=e2e-tests-kubectl-q25wr -p {"metadata":{"annotations":{"x":"y"}}}' Jan 28 11:39:24.658: INFO: stderr: "" Jan 28 11:39:24.658: INFO: stdout: "pod/redis-master-smz5m patched\n" STEP: checking annotations Jan 28 11:39:24.730: INFO: Selector matched 1 pods for map[app:redis] Jan 28 11:39:24.730: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:39:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q25wr" for this suite. Jan 28 11:39:44.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:39:44.994: INFO: namespace: e2e-tests-kubectl-q25wr, resource: bindings, ignored listing per whitelist Jan 28 11:39:45.013: INFO: namespace e2e-tests-kubectl-q25wr deletion completed in 20.276322485s • [SLOW TEST:37.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 
11:39:45.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 28 11:39:45.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 28 11:39:45.431: INFO: stderr: "" Jan 28 11:39:45.432: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:39:45.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cpdbt" for this suite. 
Jan 28 11:39:53.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:39:53.635: INFO: namespace: e2e-tests-kubectl-cpdbt, resource: bindings, ignored listing per whitelist Jan 28 11:39:53.740: INFO: namespace e2e-tests-kubectl-cpdbt deletion completed in 8.238676992s • [SLOW TEST:8.727 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:39:53.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e6dd1aee-41c2-11ea-a04a-0242ac110005 STEP: Creating a pod to test consume secrets Jan 28 11:39:54.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-wglck" to be "success or failure" Jan 28 11:39:54.281: INFO: Pod 
"pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.350464ms Jan 28 11:39:56.635: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455407526s Jan 28 11:39:58.700: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520058068s Jan 28 11:40:01.114: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.934411796s Jan 28 11:40:03.137: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.957106029s Jan 28 11:40:05.148: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.967896028s STEP: Saw pod success Jan 28 11:40:05.148: INFO: Pod "pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:40:05.151: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 28 11:40:05.372: INFO: Waiting for pod pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005 to disappear Jan 28 11:40:06.171: INFO: Pod pod-projected-secrets-e6decf92-41c2-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:40:06.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wglck" for this suite. 
Jan 28 11:40:12.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:40:12.898: INFO: namespace: e2e-tests-projected-wglck, resource: bindings, ignored listing per whitelist
Jan 28 11:40:12.927: INFO: namespace e2e-tests-projected-wglck deletion completed in 6.734436992s
• [SLOW TEST:19.187 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:40:12.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 11:40:23.895: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f246f0ef-41c2-11ea-a04a-0242ac110005"
Jan 28 11:40:23.895: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f246f0ef-41c2-11ea-a04a-0242ac110005" in namespace "e2e-tests-pods-f6sz5" to be "terminated due to deadline exceeded"
Jan 28 11:40:23.905: INFO: Pod "pod-update-activedeadlineseconds-f246f0ef-41c2-11ea-a04a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.889652ms
Jan 28 11:40:26.595: INFO: Pod "pod-update-activedeadlineseconds-f246f0ef-41c2-11ea-a04a-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.699422261s
Jan 28 11:40:26.595: INFO: Pod "pod-update-activedeadlineseconds-f246f0ef-41c2-11ea-a04a-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:40:26.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f6sz5" for this suite.
Jan 28 11:40:32.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:40:33.087: INFO: namespace: e2e-tests-pods-f6sz5, resource: bindings, ignored listing per whitelist
Jan 28 11:40:33.100: INFO: namespace e2e-tests-pods-f6sz5 deletion completed in 6.475101504s
• [SLOW TEST:20.172 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:40:33.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:40:33.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:40:43.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-npsvj" for this suite.
Jan 28 11:41:37.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:41:37.635: INFO: namespace: e2e-tests-pods-npsvj, resource: bindings, ignored listing per whitelist
Jan 28 11:41:37.642: INFO: namespace e2e-tests-pods-npsvj deletion completed in 54.203904669s
• [SLOW TEST:64.542 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:41:37.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 28 11:41:39.017: INFO: Pod name wrapped-volume-race-255506fd-41c3-11ea-a04a-0242ac110005: Found 0 pods out of 5
Jan 28 11:41:44.053: INFO: Pod name wrapped-volume-race-255506fd-41c3-11ea-a04a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-255506fd-41c3-11ea-a04a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7h5r6, will wait for the garbage collector to delete the pods
Jan 28 11:43:26.215: INFO: Deleting ReplicationController wrapped-volume-race-255506fd-41c3-11ea-a04a-0242ac110005 took: 30.916691ms
Jan 28 11:43:26.917: INFO: Terminating ReplicationController wrapped-volume-race-255506fd-41c3-11ea-a04a-0242ac110005 pods took: 701.263798ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 11:44:07.866: INFO: Pod name wrapped-volume-race-7e092611-41c3-11ea-a04a-0242ac110005: Found 0 pods out of 5
Jan 28 11:44:12.919: INFO: Pod name wrapped-volume-race-7e092611-41c3-11ea-a04a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7e092611-41c3-11ea-a04a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7h5r6, will wait for the garbage collector to delete the pods
Jan 28 11:45:55.123: INFO: Deleting ReplicationController wrapped-volume-race-7e092611-41c3-11ea-a04a-0242ac110005 took: 19.156035ms
Jan 28 11:45:55.624: INFO: Terminating ReplicationController wrapped-volume-race-7e092611-41c3-11ea-a04a-0242ac110005 pods took: 501.056584ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 11:46:44.067: INFO: Pod name wrapped-volume-race-db24a131-41c3-11ea-a04a-0242ac110005: Found 0 pods out of 5
Jan 28 11:46:49.091: INFO: Pod name wrapped-volume-race-db24a131-41c3-11ea-a04a-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-db24a131-41c3-11ea-a04a-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7h5r6, will wait for the garbage collector to delete the pods
Jan 28 11:49:25.316: INFO: Deleting ReplicationController wrapped-volume-race-db24a131-41c3-11ea-a04a-0242ac110005 took: 30.553156ms
Jan 28 11:49:25.718: INFO: Terminating ReplicationController wrapped-volume-race-db24a131-41c3-11ea-a04a-0242ac110005 pods took: 401.710712ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:50:15.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-7h5r6" for this suite.
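The race test above repeatedly creates a ReplicationController whose pods mount ConfigMap volumes (volume types that older kubelets back with an internal emptyDir "wrapper"), then deletes it and waits for the garbage collector. A hand-written minimal sketch of that pod pattern follows; all names and the image are illustrative assumptions (the real test creates 50 ConfigMaps and mounts many such volumes per pod):

```yaml
# Illustrative sketch only; the e2e test generates its own names and mounts ~50 volumes.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example
spec:
  replicas: 5
  selector:
    app: wrapped-volume-race-example
  template:
    metadata:
      labels:
        app: wrapped-volume-race-example
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "10000"]   # stay Running so readiness can be checked
        volumeMounts:
        - name: cfg-0
          mountPath: /etc/cfg-0
      volumes:
      - name: cfg-0
        configMap:
          name: racey-configmap-0   # hypothetical; one of the pre-created ConfigMaps
```

Churning several replicas of this template at once is what exercises the mount/teardown race the test is checking for.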
Jan 28 11:50:23.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:50:23.373: INFO: namespace: e2e-tests-emptydir-wrapper-7h5r6, resource: bindings, ignored listing per whitelist
Jan 28 11:50:23.500: INFO: namespace e2e-tests-emptydir-wrapper-7h5r6 deletion completed in 8.227128938s
• [SLOW TEST:525.857 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:50:23.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 11:50:23.798: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 28 11:50:29.074: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 11:50:39.102: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 28 11:50:41.134: INFO: Creating deployment "test-rollover-deployment"
Jan 28 11:50:41.220: INFO: Make sure deployment
"test-rollover-deployment" performs scaling operations Jan 28 11:50:43.891: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 28 11:50:44.140: INFO: Ensure that both replica sets have 1 created replica Jan 28 11:50:44.157: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 28 11:50:44.184: INFO: Updating deployment test-rollover-deployment Jan 28 11:50:44.185: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 28 11:50:46.246: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 28 11:50:46.261: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 28 11:50:46.271: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:46.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:48.671: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:48.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:50.347: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:50.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:52.914: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:52.915: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:54.507: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:54.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:56.299: INFO: 
all replica sets need to contain the pod-template-hash label Jan 28 11:50:56.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:50:58.304: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:50:58.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:51:00.303: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:51:00.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:51:02.294: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:51:02.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:51:04.294: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:51:04.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:51:06.296: INFO: all replica sets need to contain the pod-template-hash label Jan 28 11:51:06.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809042, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715809056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715809041, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 11:51:08.306: INFO: Jan 28 11:51:08.306: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 28 11:51:08.333: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-s8svr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s8svr/deployments/test-rollover-deployment,UID:68898790-41c4-11ea-a994-fa163e34d433,ResourceVersion:19741411,Generation:2,CreationTimestamp:2020-01-28 11:50:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-28 11:50:42 +0000 UTC 2020-01-28 11:50:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-28 11:51:07 +0000 UTC 2020-01-28 11:50:41 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 28 11:51:08.345: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-s8svr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s8svr/replicasets/test-rollover-deployment-5b8479fdb6,UID:6a58639d-41c4-11ea-a994-fa163e34d433,ResourceVersion:19741401,Generation:2,CreationTimestamp:2020-01-28 11:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 68898790-41c4-11ea-a994-fa163e34d433 0xc000b9fa67 0xc000b9fa68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 28 11:51:08.345: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 28 11:51:08.345: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-s8svr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s8svr/replicasets/test-rollover-controller,UID:5e2ea9fd-41c4-11ea-a994-fa163e34d433,ResourceVersion:19741410,Generation:2,CreationTimestamp:2020-01-28 11:50:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 68898790-41c4-11ea-a994-fa163e34d433 0xc000b9f72f 0xc000b9f740}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 28 11:51:08.346: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-s8svr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s8svr/replicasets/test-rollover-deployment-58494b7559,UID:68ba4dbd-41c4-11ea-a994-fa163e34d433,ResourceVersion:19741364,Generation:2,CreationTimestamp:2020-01-28 11:50:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 68898790-41c4-11ea-a994-fa163e34d433 0xc000b9f827 0xc000b9f828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 28 11:51:08.357: INFO: Pod "test-rollover-deployment-5b8479fdb6-rz44n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-rz44n,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-s8svr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s8svr/pods/test-rollover-deployment-5b8479fdb6-rz44n,UID:6ab186cf-41c4-11ea-a994-fa163e34d433,ResourceVersion:19741385,Generation:0,CreationTimestamp:2020-01-28 11:50:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6a58639d-41c4-11ea-a994-fa163e34d433 0xc001928f37 0xc001928f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgwxg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgwxg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mgwxg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001929140} {node.kubernetes.io/unreachable Exists NoExecute 0xc001929160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:50:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:50:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:50:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 11:50:44 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-28 11:50:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-28 11:50:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3b4beacec7068853f156d006277f8680c735ff708ad642fda4ca1cbe7cdb762d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:51:08.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-s8svr" for this suite.
Jan 28 11:51:18.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:51:19.166: INFO: namespace: e2e-tests-deployment-s8svr, resource: bindings, ignored listing per whitelist
Jan 28 11:51:19.211: INFO: namespace e2e-tests-deployment-s8svr deletion completed in 10.840836497s
• [SLOW TEST:55.711 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:51:19.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qntrx
Jan 28 11:51:29.494: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qntrx
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 11:51:29.502: INFO: Initial restart count of pod liveness-http is 0
Jan 28 11:51:49.732: INFO: Restart count of pod e2e-tests-container-probe-qntrx/liveness-http is now 1 (20.230713215s elapsed)
Jan 28 11:52:08.520: INFO: Restart count of pod e2e-tests-container-probe-qntrx/liveness-http is now 2 (39.018220456s elapsed)
Jan 28 11:52:28.855: INFO: Restart count of pod e2e-tests-container-probe-qntrx/liveness-http is now 3 (59.352907314s elapsed)
Jan 28 11:52:49.202: INFO: Restart count of pod e2e-tests-container-probe-qntrx/liveness-http is now 4 (1m19.700623527s elapsed)
Jan 28 11:53:52.661: INFO: Restart count of pod e2e-tests-container-probe-qntrx/liveness-http is now 5 (2m23.159476202s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:53:52.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qntrx" for this suite.
Jan 28 11:53:58.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:53:59.063: INFO: namespace: e2e-tests-container-probe-qntrx, resource: bindings, ignored listing per whitelist
Jan 28 11:53:59.155: INFO: namespace e2e-tests-container-probe-qntrx deletion completed in 6.346993879s
• [SLOW TEST:159.944 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:53:59.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 28 11:53:59.502: INFO: Waiting up to 5m0s for pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-429w5" to be "success or failure"
Jan 28 11:53:59.555: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.941451ms
Jan 28 11:54:02.029: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.526654894s
Jan 28 11:54:04.044: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542341864s
Jan 28 11:54:06.253: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750588281s
Jan 28 11:54:08.265: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763349532s
Jan 28 11:54:10.281: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.779245852s
STEP: Saw pod success
Jan 28 11:54:10.281: INFO: Pod "downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:54:10.285: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 28 11:54:10.549: INFO: Waiting for pod downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005 to disappear
Jan 28 11:54:10.694: INFO: Pod downward-api-deb37bfe-41c4-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:54:10.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-429w5" for this suite.
Jan 28 11:54:17.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:54:17.744: INFO: namespace: e2e-tests-downward-api-429w5, resource: bindings, ignored listing per whitelist
Jan 28 11:54:18.107: INFO: namespace e2e-tests-downward-api-429w5 deletion completed in 7.395578378s
• [SLOW TEST:18.952 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:54:18.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 11:54:18.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-qlgvb'
Jan 28 11:54:20.288: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 11:54:20.288: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 28 11:54:22.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-qlgvb'
Jan 28 11:54:22.779: INFO: stderr: ""
Jan 28 11:54:22.780: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:54:22.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qlgvb" for this suite.
Jan 28 11:54:29.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:54:29.220: INFO: namespace: e2e-tests-kubectl-qlgvb, resource: bindings, ignored listing per whitelist
Jan 28 11:54:29.249: INFO: namespace e2e-tests-kubectl-qlgvb deletion completed in 6.267305582s
• [SLOW TEST:11.141 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:54:29.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f0aa661b-41c4-11ea-a04a-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-f0aa6745-41c4-11ea-a04a-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f0aa661b-41c4-11ea-a04a-0242ac110005
STEP: Updating configmap cm-test-opt-upd-f0aa6745-41c4-11ea-a04a-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-f0aa6789-41c4-11ea-a04a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:54:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x7hts" for this suite.
Jan 28 11:55:12.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:55:12.160: INFO: namespace: e2e-tests-projected-x7hts, resource: bindings, ignored listing per whitelist
Jan 28 11:55:12.275: INFO: namespace e2e-tests-projected-x7hts deletion completed in 24.196719188s
• [SLOW TEST:43.026 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:55:12.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0a6ddcdb-41c5-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 11:55:12.785: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-mxvdl" to be "success or failure"
Jan 28 11:55:12.813: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.464306ms
Jan 28 11:55:14.839: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053771225s
Jan 28 11:55:16.875: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089342937s
Jan 28 11:55:19.047: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261880028s
Jan 28 11:55:21.069: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283134057s
Jan 28 11:55:23.082: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.296956056s
STEP: Saw pod success
Jan 28 11:55:23.082: INFO: Pod "pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 11:55:23.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 28 11:55:23.229: INFO: Waiting for pod pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 11:55:23.329: INFO: Pod pod-projected-secrets-0a6f8e6c-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:55:23.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mxvdl" for this suite.
Jan 28 11:55:29.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 11:55:29.573: INFO: namespace: e2e-tests-projected-mxvdl, resource: bindings, ignored listing per whitelist
Jan 28 11:55:29.823: INFO: namespace e2e-tests-projected-mxvdl deletion completed in 6.482358248s
• [SLOW TEST:17.548 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 11:55:29.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 28 11:55:30.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:30.541: INFO: stderr: ""
Jan 28 11:55:30.541: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 11:55:30.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:30.746: INFO: stderr: ""
Jan 28 11:55:30.746: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
Jan 28 11:55:30.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmhs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:30.973: INFO: stderr: ""
Jan 28 11:55:30.973: INFO: stdout: ""
Jan 28 11:55:30.973: INFO: update-demo-nautilus-2qmhs is created but not running
Jan 28 11:55:35.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:36.136: INFO: stderr: ""
Jan 28 11:55:36.136: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
Jan 28 11:55:36.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmhs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:36.266: INFO: stderr: ""
Jan 28 11:55:36.266: INFO: stdout: ""
Jan 28 11:55:36.266: INFO: update-demo-nautilus-2qmhs is created but not running
Jan 28 11:55:41.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:41.403: INFO: stderr: ""
Jan 28 11:55:41.403: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
Jan 28 11:55:41.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmhs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:41.522: INFO: stderr: ""
Jan 28 11:55:41.523: INFO: stdout: ""
Jan 28 11:55:41.523: INFO: update-demo-nautilus-2qmhs is created but not running
Jan 28 11:55:46.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:46.691: INFO: stderr: ""
Jan 28 11:55:46.691: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
Jan 28 11:55:46.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmhs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:46.809: INFO: stderr: ""
Jan 28 11:55:46.809: INFO: stdout: "true"
Jan 28 11:55:46.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmhs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:46.941: INFO: stderr: ""
Jan 28 11:55:46.941: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:55:46.941: INFO: validating pod update-demo-nautilus-2qmhs
Jan 28 11:55:46.959: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:55:46.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:55:46.959: INFO: update-demo-nautilus-2qmhs is verified up and running
Jan 28 11:55:46.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:47.085: INFO: stderr: ""
Jan 28 11:55:47.085: INFO: stdout: "true"
Jan 28 11:55:47.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:47.201: INFO: stderr: ""
Jan 28 11:55:47.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:55:47.201: INFO: validating pod update-demo-nautilus-g2nzw
Jan 28 11:55:47.211: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:55:47.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:55:47.211: INFO: update-demo-nautilus-g2nzw is verified up and running
STEP: scaling down the replication controller
Jan 28 11:55:47.214: INFO: scanned /root for discovery docs:
Jan 28 11:55:47.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:48.468: INFO: stderr: ""
Jan 28 11:55:48.469: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 11:55:48.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:48.745: INFO: stderr: ""
Jan 28 11:55:48.745: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 11:55:53.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:53.941: INFO: stderr: ""
Jan 28 11:55:53.942: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 11:55:58.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:55:59.137: INFO: stderr: ""
Jan 28 11:55:59.138: INFO: stdout: "update-demo-nautilus-2qmhs update-demo-nautilus-g2nzw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 11:56:04.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:04.340: INFO: stderr: ""
Jan 28 11:56:04.341: INFO: stdout: "update-demo-nautilus-g2nzw "
Jan 28 11:56:04.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:04.545: INFO: stderr: ""
Jan 28 11:56:04.545: INFO: stdout: "true"
Jan 28 11:56:04.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:04.687: INFO: stderr: ""
Jan 28 11:56:04.687: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:56:04.687: INFO: validating pod update-demo-nautilus-g2nzw
Jan 28 11:56:04.706: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:56:04.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:56:04.706: INFO: update-demo-nautilus-g2nzw is verified up and running
STEP: scaling up the replication controller
Jan 28 11:56:04.710: INFO: scanned /root for discovery docs:
Jan 28 11:56:04.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:05.972: INFO: stderr: ""
Jan 28 11:56:05.972: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 11:56:05.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:06.118: INFO: stderr: ""
Jan 28 11:56:06.118: INFO: stdout: "update-demo-nautilus-g2nzw update-demo-nautilus-nf6hm "
Jan 28 11:56:06.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:06.301: INFO: stderr: ""
Jan 28 11:56:06.301: INFO: stdout: "true"
Jan 28 11:56:06.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:07.181: INFO: stderr: ""
Jan 28 11:56:07.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:56:07.181: INFO: validating pod update-demo-nautilus-g2nzw
Jan 28 11:56:07.225: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:56:07.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:56:07.225: INFO: update-demo-nautilus-g2nzw is verified up and running
Jan 28 11:56:07.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf6hm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:07.574: INFO: stderr: ""
Jan 28 11:56:07.574: INFO: stdout: ""
Jan 28 11:56:07.574: INFO: update-demo-nautilus-nf6hm is created but not running
Jan 28 11:56:12.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:12.961: INFO: stderr: ""
Jan 28 11:56:12.961: INFO: stdout: "update-demo-nautilus-g2nzw update-demo-nautilus-nf6hm "
Jan 28 11:56:12.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:13.061: INFO: stderr: ""
Jan 28 11:56:13.061: INFO: stdout: "true"
Jan 28 11:56:13.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:13.203: INFO: stderr: ""
Jan 28 11:56:13.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:56:13.203: INFO: validating pod update-demo-nautilus-g2nzw
Jan 28 11:56:13.221: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:56:13.221: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:56:13.221: INFO: update-demo-nautilus-g2nzw is verified up and running
Jan 28 11:56:13.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf6hm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:13.336: INFO: stderr: ""
Jan 28 11:56:13.336: INFO: stdout: ""
Jan 28 11:56:13.336: INFO: update-demo-nautilus-nf6hm is created but not running
Jan 28 11:56:18.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:18.552: INFO: stderr: ""
Jan 28 11:56:18.553: INFO: stdout: "update-demo-nautilus-g2nzw update-demo-nautilus-nf6hm "
Jan 28 11:56:18.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:18.733: INFO: stderr: ""
Jan 28 11:56:18.733: INFO: stdout: "true"
Jan 28 11:56:18.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g2nzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b'
Jan 28 11:56:18.887: INFO: stderr: ""
Jan 28 11:56:18.888: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 11:56:18.888: INFO: validating pod update-demo-nautilus-g2nzw
Jan 28 11:56:18.902: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 11:56:18.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 11:56:18.902: INFO: update-demo-nautilus-g2nzw is verified up and running Jan 28 11:56:18.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf6hm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b' Jan 28 11:56:19.041: INFO: stderr: "" Jan 28 11:56:19.041: INFO: stdout: "true" Jan 28 11:56:19.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf6hm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z2r9b' Jan 28 11:56:19.164: INFO: stderr: "" Jan 28 11:56:19.164: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 11:56:19.164: INFO: validating pod update-demo-nautilus-nf6hm Jan 28 11:56:19.176: INFO: got data: { "image": "nautilus.jpg" } Jan 28 11:56:19.177: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 11:56:19.177: INFO: update-demo-nautilus-nf6hm is verified up and running STEP: using delete to clean up resources Jan 28 11:56:19.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z2r9b' Jan 28 11:56:19.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 28 11:56:19.340: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 28 11:56:19.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z2r9b' Jan 28 11:56:19.470: INFO: stderr: "No resources found.\n" Jan 28 11:56:19.470: INFO: stdout: "" Jan 28 11:56:19.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z2r9b -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 11:56:19.656: INFO: stderr: "" Jan 28 11:56:19.657: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:56:19.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z2r9b" for this suite. 
Jan 28 11:56:43.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:56:44.087: INFO: namespace: e2e-tests-kubectl-z2r9b, resource: bindings, ignored listing per whitelist Jan 28 11:56:44.174: INFO: namespace e2e-tests-kubectl-z2r9b deletion completed in 24.490455747s • [SLOW TEST:74.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:56:44.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 28 11:56:44.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:45.266: INFO: stderr: "" Jan 28 11:56:45.266: INFO: stdout: "pod/pause created\n" Jan 28 11:56:45.266: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 28 
11:56:45.266: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8cv4t" to be "running and ready" Jan 28 11:56:45.367: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 100.510083ms Jan 28 11:56:47.548: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282107837s Jan 28 11:56:49.564: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297794599s Jan 28 11:56:51.667: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401140445s Jan 28 11:56:53.678: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41188633s Jan 28 11:56:55.696: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.429418975s Jan 28 11:56:55.696: INFO: Pod "pause" satisfied condition "running and ready" Jan 28 11:56:55.696: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 28 11:56:55.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:55.929: INFO: stderr: "" Jan 28 11:56:55.930: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 28 11:56:55.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:56.067: INFO: stderr: "" Jan 28 11:56:56.067: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 28 11:56:56.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
label pods pause testing-label- --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:56.196: INFO: stderr: "" Jan 28 11:56:56.196: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 28 11:56:56.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:56.359: INFO: stderr: "" Jan 28 11:56:56.359: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 28 11:56:56.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:56.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 28 11:56:56.612: INFO: stdout: "pod \"pause\" force deleted\n" Jan 28 11:56:56.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8cv4t' Jan 28 11:56:56.806: INFO: stderr: "No resources found.\n" Jan 28 11:56:56.807: INFO: stdout: "" Jan 28 11:56:56.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8cv4t -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 11:56:56.942: INFO: stderr: "" Jan 28 11:56:56.942: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:56:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8cv4t" for this suite. 
Jan 28 11:57:03.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:57:03.578: INFO: namespace: e2e-tests-kubectl-8cv4t, resource: bindings, ignored listing per whitelist Jan 28 11:57:03.811: INFO: namespace e2e-tests-kubectl-8cv4t deletion completed in 6.855206611s • [SLOW TEST:19.637 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:57:03.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 28 11:57:04.126: INFO: Waiting up to 5m0s for pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-42gbd" to be "success or failure" Jan 28 11:57:04.131: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.854793ms Jan 28 11:57:06.157: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030954386s Jan 28 11:57:08.172: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045947906s Jan 28 11:57:10.232: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105971885s Jan 28 11:57:12.724: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.597639477s Jan 28 11:57:14.738: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611661969s STEP: Saw pod success Jan 28 11:57:14.738: INFO: Pod "downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:57:14.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005 container dapi-container: STEP: delete the pod Jan 28 11:57:16.047: INFO: Waiting for pod downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005 to disappear Jan 28 11:57:16.225: INFO: Pod downward-api-4ccc902f-41c5-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:57:16.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-42gbd" for this suite. 
Jan 28 11:57:22.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:57:22.371: INFO: namespace: e2e-tests-downward-api-42gbd, resource: bindings, ignored listing per whitelist Jan 28 11:57:22.618: INFO: namespace e2e-tests-downward-api-42gbd deletion completed in 6.37545857s • [SLOW TEST:18.806 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:57:22.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-57f23e85-41c5-11ea-a04a-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 28 11:57:22.849: INFO: Waiting up to 5m0s for pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-462fw" to be "success or failure" Jan 28 11:57:22.872: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.350239ms Jan 28 11:57:25.109: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260178661s Jan 28 11:57:27.124: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27504152s Jan 28 11:57:29.180: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330936567s Jan 28 11:57:31.257: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408298384s Jan 28 11:57:33.273: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.423818369s STEP: Saw pod success Jan 28 11:57:33.273: INFO: Pod "pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:57:33.286: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 28 11:57:34.112: INFO: Waiting for pod pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005 to disappear Jan 28 11:57:34.414: INFO: Pod pod-configmaps-57f31b74-41c5-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:57:34.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-462fw" for this suite. 
Jan 28 11:57:40.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:57:40.763: INFO: namespace: e2e-tests-configmap-462fw, resource: bindings, ignored listing per whitelist Jan 28 11:57:40.777: INFO: namespace e2e-tests-configmap-462fw deletion completed in 6.346775654s • [SLOW TEST:18.158 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:57:40.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 28 11:57:41.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-sl77r' Jan 28 11:57:41.262: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 28 11:57:41.262: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jan 28 11:57:45.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-sl77r' Jan 28 11:57:45.649: INFO: stderr: "" Jan 28 11:57:45.650: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:57:45.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sl77r" for this suite. 
Jan 28 11:57:51.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:57:51.962: INFO: namespace: e2e-tests-kubectl-sl77r, resource: bindings, ignored listing per whitelist Jan 28 11:57:52.021: INFO: namespace e2e-tests-kubectl-sl77r deletion completed in 6.350111333s • [SLOW TEST:11.244 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:57:52.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 28 11:57:52.313: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:58:02.961: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-pods-bzcsb" for this suite. Jan 28 11:58:57.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:58:57.241: INFO: namespace: e2e-tests-pods-bzcsb, resource: bindings, ignored listing per whitelist Jan 28 11:58:57.300: INFO: namespace e2e-tests-pods-bzcsb deletion completed in 54.305480336s • [SLOW TEST:65.279 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:58:57.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0128 11:59:11.189212 8 metrics_grabber.go:81] Master node is not 
registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 28 11:59:11.189: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:59:11.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-clq77" for this suite. 
Jan 28 11:59:29.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:59:30.695: INFO: namespace: e2e-tests-gc-clq77, resource: bindings, ignored listing per whitelist Jan 28 11:59:30.719: INFO: namespace e2e-tests-gc-clq77 deletion completed in 19.522861628s • [SLOW TEST:33.418 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:59:30.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a58652c9-41c5-11ea-a04a-0242ac110005 STEP: Creating a pod to test consume secrets Jan 28 11:59:33.503: INFO: Waiting up to 5m0s for pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-lm7vb" to be "success or failure" Jan 28 11:59:33.895: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 391.716089ms Jan 28 11:59:35.909: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40596483s Jan 28 11:59:38.039: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.536315596s Jan 28 11:59:40.059: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555930274s Jan 28 11:59:43.183: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.68033625s Jan 28 11:59:45.211: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.707945776s Jan 28 11:59:47.394: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.891662823s Jan 28 11:59:49.411: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.907800359s STEP: Saw pod success Jan 28 11:59:49.411: INFO: Pod "pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure" Jan 28 11:59:49.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 28 11:59:49.588: INFO: Waiting for pod pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005 to disappear Jan 28 11:59:49.605: INFO: Pod pod-secrets-a5d17209-41c5-11ea-a04a-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 28 11:59:49.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lm7vb" for this suite. 
Jan 28 11:59:55.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 11:59:55.812: INFO: namespace: e2e-tests-secrets-lm7vb, resource: bindings, ignored listing per whitelist Jan 28 11:59:56.005: INFO: namespace e2e-tests-secrets-lm7vb deletion completed in 6.387241787s • [SLOW TEST:25.285 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 28 11:59:56.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 28 11:59:56.223: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 14.160414ms)
Jan 28 11:59:56.228: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.639406ms)
Jan 28 11:59:56.233: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.256104ms)
Jan 28 11:59:56.243: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.202611ms)
Jan 28 11:59:56.258: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.971153ms)
Jan 28 11:59:56.285: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 26.535081ms)
Jan 28 11:59:56.309: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.866973ms)
Jan 28 11:59:56.333: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.550011ms)
Jan 28 11:59:56.349: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.302397ms)
Jan 28 11:59:56.360: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.177778ms)
Jan 28 11:59:56.380: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 20.477546ms)
Jan 28 11:59:56.398: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.120453ms)
Jan 28 11:59:56.421: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 22.91671ms)
Jan 28 11:59:56.433: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.105211ms)
Jan 28 11:59:56.451: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.813688ms)
Jan 28 11:59:56.458: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.050375ms)
Jan 28 11:59:56.463: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.04471ms)
Jan 28 11:59:56.471: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.197328ms)
Jan 28 11:59:56.476: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.673782ms)
Jan 28 11:59:56.486: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.444004ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 11:59:56.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kdkp4" for this suite.
Jan 28 12:00:02.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:00:02.659: INFO: namespace: e2e-tests-proxy-kdkp4, resource: bindings, ignored listing per whitelist
Jan 28 12:00:02.712: INFO: namespace e2e-tests-proxy-kdkp4 deletion completed in 6.218338448s

• [SLOW TEST:6.706 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
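Each numbered attempt logged above is a GET against the node proxy subresource, which tunnels through the apiserver to the kubelet's /logs/ endpoint on the explicit port 10250. A minimal sketch of how that URL is formed; the apiserver base URL below is a placeholder assumption, only the node name and port come from the log:

```python
# Illustrative sketch, not the e2e framework's code: building the node proxy
# subresource URL that the test above requests 20 times.
def node_logs_proxy_url(api_base: str, node: str, kubelet_port: int = 10250) -> str:
    """URL that proxies through the apiserver to the kubelet's /logs/ listing."""
    return f"{api_base}/api/v1/nodes/{node}:{kubelet_port}/proxy/logs/"

# Usage with the node name from this run (base URL is hypothetical):
url = node_logs_proxy_url("https://apiserver.example:6443", "hunter-server-hu5at5svl7ps")
```

The `node:port` form in the path is what makes this the "explicit kubelet port" variant of the test.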
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:00:02.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 28 12:00:02.832: INFO: Waiting up to 5m0s for pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-var-expansion-j74wb" to be "success or failure"
Jan 28 12:00:02.844: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.682593ms
Jan 28 12:00:04.878: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046099374s
Jan 28 12:00:06.925: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092350746s
Jan 28 12:00:09.038: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205313627s
Jan 28 12:00:11.060: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227583035s
Jan 28 12:00:13.077: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244690676s
STEP: Saw pod success
Jan 28 12:00:13.077: INFO: Pod "var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:00:13.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 28 12:00:13.250: INFO: Waiting for pod var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 12:00:13.264: INFO: Pod var-expansion-b750cb90-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:00:13.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-j74wb" for this suite.
Jan 28 12:00:19.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:00:19.552: INFO: namespace: e2e-tests-var-expansion-j74wb, resource: bindings, ignored listing per whitelist
Jan 28 12:00:19.767: INFO: namespace e2e-tests-var-expansion-j74wb deletion completed in 6.488079332s

• [SLOW TEST:17.055 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
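The env-composition test above creates a pod whose later env vars reference earlier ones with `$(VAR)` syntax. A rough sketch of the documented expansion rule (defined references are substituted, `$$` escapes a literal `$`, undefined references pass through unchanged); this is not the kubelet's actual implementation:

```python
import re

# Minimal sketch of Kubernetes $(VAR) env expansion, as exercised by the
# "composing env vars into new env vars" test above.
def expand(value: str, env: dict) -> str:
    def repl(m):
        if m.group(0) == "$$":
            return "$"                      # "$$" escapes a literal "$"
        return env.get(m.group(1), m.group(0))  # undefined refs stay verbatim
    return re.sub(r"\$\$|\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, value)

# Compose a new var from an earlier one, as the test's pod spec does
# (names here are illustrative, not the test's actual var names):
env = {"FOO": "foo-value"}
env["BAR"] = expand("bar-$(FOO)", env)
```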
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:00:19.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:00:20.102: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-nh9mt" to be "success or failure"
Jan 28 12:00:20.111: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722606ms
Jan 28 12:00:22.130: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028119374s
Jan 28 12:00:24.142: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039988224s
Jan 28 12:00:26.346: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244127605s
Jan 28 12:00:28.361: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258339369s
Jan 28 12:00:30.396: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.293267564s
STEP: Saw pod success
Jan 28 12:00:30.396: INFO: Pod "downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:00:30.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:00:31.038: INFO: Waiting for pod downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 12:00:31.056: INFO: Pod downwardapi-volume-c19ccb4b-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:00:31.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nh9mt" for this suite.
Jan 28 12:00:37.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:00:37.147: INFO: namespace: e2e-tests-projected-nh9mt, resource: bindings, ignored listing per whitelist
Jan 28 12:00:37.225: INFO: namespace e2e-tests-projected-nh9mt deletion completed in 6.15887253s

• [SLOW TEST:17.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
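The projected downward API test above checks that the container's memory limit, divided by the `resourceFieldRef` divisor, is written into a file in the volume. A hedged sketch of that arithmetic (not kubelet code; ceiling rounding and the suffix table are my assumptions, covering only common binary suffixes):

```python
import math

# Hedged sketch of the value the "memory limit" test above expects to read
# back from the projected file.
BINARY_SUFFIX = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_binary_quantity(q: str) -> int:
    """Parse quantities like '64Mi' into bytes (plain integers pass through)."""
    for suf, mult in BINARY_SUFFIX.items():
        if q.endswith(suf):
            return int(q[: -len(suf)]) * mult
    return int(q)

def downward_api_value(limit: str, divisor: str = "1") -> int:
    # e.g. a limit of "64Mi" with divisor "1Mi" projects the value 64
    return math.ceil(parse_binary_quantity(limit) / parse_binary_quantity(divisor))
```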
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:00:37.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-cc0192a5-41c5-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:00:37.704: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-bf59b" to be "success or failure"
Jan 28 12:00:37.771: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 66.717105ms
Jan 28 12:00:39.994: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289756782s
Jan 28 12:00:42.033: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328474509s
Jan 28 12:00:44.047: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343173782s
Jan 28 12:00:46.068: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363517844s
Jan 28 12:00:48.082: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378326117s
STEP: Saw pod success
Jan 28 12:00:48.082: INFO: Pod "pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:00:48.089: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 28 12:00:48.399: INFO: Waiting for pod pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 12:00:49.507: INFO: Pod pod-configmaps-cc03627d-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:00:49.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bf59b" for this suite.
Jan 28 12:00:56.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:00:56.082: INFO: namespace: e2e-tests-configmap-bf59b, resource: bindings, ignored listing per whitelist
Jan 28 12:00:56.321: INFO: namespace e2e-tests-configmap-bf59b deletion completed in 6.791135362s

• [SLOW TEST:19.096 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
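The ConfigMap volume test above asserts only that each key in the ConfigMap's data shows up as a file under the mount path. A minimal sketch of that projection, ignoring file modes and the kubelet's atomic symlink-swap update; the mount path and data below are illustrative:

```python
# Sketch of ConfigMap volume projection: one file per data key.
def project_configmap(data: dict, mount_path: str) -> dict:
    """Map each ConfigMap key to the file path it would occupy in the volume."""
    return {f"{mount_path}/{key}": value for key, value in data.items()}

files = project_configmap({"data-1": "value-1"}, "/etc/configmap-volume")
```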
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:00:56.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:00:56.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-nj668" to be "success or failure"
Jan 28 12:00:56.654: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.47759ms
Jan 28 12:00:58.746: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10245969s
Jan 28 12:01:00.772: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128268394s
Jan 28 12:01:02.941: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.297064262s
Jan 28 12:01:04.952: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307693118s
Jan 28 12:01:06.968: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324221406s
STEP: Saw pod success
Jan 28 12:01:06.968: INFO: Pod "downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:01:06.974: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:01:07.185: INFO: Waiting for pod downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 12:01:07.195: INFO: Pod downwardapi-volume-d75bb9fc-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:01:07.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nj668" for this suite.
Jan 28 12:01:13.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:01:13.376: INFO: namespace: e2e-tests-downward-api-nj668, resource: bindings, ignored listing per whitelist
Jan 28 12:01:13.484: INFO: namespace e2e-tests-downward-api-nj668 deletion completed in 6.280635283s

• [SLOW TEST:17.162 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
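The cpu-limit variant above works like the memory case, but quantities are resolved in millicores ("500m" is 500, "2" is 2000) before the divisor is applied. A hedged sketch (rounding up is my assumption, not verified against the kubelet):

```python
import math

# Hedged sketch for the "cpu limit" downward API test above.
def milli(q: str) -> int:
    """Resolve a cpu quantity to millicores: '500m' -> 500, '2' -> 2000."""
    return int(q[:-1]) if q.endswith("m") else int(q) * 1000

def cpu_downward_api_value(limit: str, divisor: str = "1") -> int:
    # e.g. limit "500m" with divisor "1m" projects the value 500
    return math.ceil(milli(limit) / milli(divisor))
```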
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:01:13.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 28 12:01:13.790: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 12:01:13.883: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 12:01:13.902: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 28 12:01:13.939: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 28 12:01:13.939: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 12:01:13.939: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:01:13.939: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 28 12:01:13.939: INFO: 	Container weave ready: true, restart count 0
Jan 28 12:01:13.939: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 12:01:13.939: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 28 12:01:13.939: INFO: 	Container coredns ready: true, restart count 0
Jan 28 12:01:13.939: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:01:13.939: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:01:13.939: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:01:13.939: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 28 12:01:13.939: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ee0b3c902ba020], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:01:15.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9zq6b" for this suite.
Jan 28 12:01:21.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:01:21.515: INFO: namespace: e2e-tests-sched-pred-9zq6b, resource: bindings, ignored listing per whitelist
Jan 28 12:01:21.574: INFO: namespace e2e-tests-sched-pred-9zq6b deletion completed in 6.539601008s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:8.088 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:01:21.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e6666d8b-41c5-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:01:21.828: INFO: Waiting up to 5m0s for pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-s822z" to be "success or failure"
Jan 28 12:01:21.845: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.419221ms
Jan 28 12:01:23.870: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041415123s
Jan 28 12:01:25.910: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082017395s
Jan 28 12:01:27.950: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122219044s
Jan 28 12:01:29.976: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147859703s
Jan 28 12:01:31.993: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164558097s
STEP: Saw pod success
Jan 28 12:01:31.993: INFO: Pod "pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:01:32.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 28 12:01:32.646: INFO: Waiting for pod pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005 to disappear
Jan 28 12:01:32.717: INFO: Pod pod-secrets-e66750e1-41c5-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:01:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s822z" for this suite.
Jan 28 12:01:38.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:01:38.919: INFO: namespace: e2e-tests-secrets-s822z, resource: bindings, ignored listing per whitelist
Jan 28 12:01:39.182: INFO: namespace e2e-tests-secrets-s822z deletion completed in 6.458655426s

• [SLOW TEST:17.606 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
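The "with mappings" variant above projects the secret through an `items` list that remaps secret keys to custom file paths; keys not listed are not projected at all. A sketch under assumed names (the key/path values below are illustrative, not read from the test):

```python
# Sketch of secret volume projection with an `items` key-to-path mapping.
def project_with_items(data: dict, items: list, mount_path: str) -> dict:
    """Only keys named in `items` are written, at their remapped paths."""
    return {f"{mount_path}/{it['path']}": data[it["key"]]
            for it in items if it["key"] in data}

files = project_with_items(
    {"data-1": "value-1", "data-2": "value-2"},
    [{"key": "data-1", "path": "new-path-data-1"}],
    "/etc/secret-volume",
)
```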
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:01:39.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xq77l in namespace e2e-tests-proxy-5vbq5
I0128 12:01:39.452596       8 runners.go:184] Created replication controller with name: proxy-service-xq77l, namespace: e2e-tests-proxy-5vbq5, replica count: 1
I0128 12:01:40.504319       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:41.504953       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:42.505764       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:43.506599       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:44.507462       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:45.508139       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:46.509001       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:47.509707       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:01:48.510749       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:49.512430       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:50.513707       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:51.514534       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:52.515474       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:53.516354       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:54.517186       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 12:01:55.518181       8 runners.go:184] proxy-service-xq77l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 12:01:55.543: INFO: setup took 16.220515932s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 28 12:01:55.590: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5vbq5/pods/http:proxy-service-xq77l-g5r9j:162/proxy/: bar (200; 45.125777ms)
Jan 28 12:01:55.590: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-5vbq5/pods/proxy-service-xq77l-g5r9j:1080/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:02:11.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-clzz9" to be "success or failure"
Jan 28 12:02:11.666: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.517128ms
Jan 28 12:02:13.771: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115014665s
Jan 28 12:02:15.786: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130005909s
Jan 28 12:02:18.325: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.669439077s
Jan 28 12:02:20.351: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.695799026s
STEP: Saw pod success
Jan 28 12:02:20.352: INFO: Pod "downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:02:20.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:02:20.958: INFO: Waiting for pod downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:02:21.018: INFO: Pod downwardapi-volume-041a8746-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:02:21.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-clzz9" for this suite.
Jan 28 12:02:27.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:02:27.443: INFO: namespace: e2e-tests-downward-api-clzz9, resource: bindings, ignored listing per whitelist
Jan 28 12:02:27.521: INFO: namespace e2e-tests-downward-api-clzz9 deletion completed in 6.483674677s

• [SLOW TEST:16.066 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
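The test above creates a pod that mounts a downwardAPI volume exposing the pod's own name as a file, then reads it back. A minimal sketch of that kind of pod spec (names and paths are illustrative; the suite's actual spec lives in test/e2e/common/downwardapi_volume.go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Print the file projected from the downward API, then exit 0 ("success or failure" check)
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file's content becomes the pod name
```

The framework then waits for the pod to reach Succeeded and compares the container log against the expected pod name.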
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:02:27.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0db8acbc-41c6-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:02:27.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-cz54p" to be "success or failure"
Jan 28 12:02:27.861: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.04456ms
Jan 28 12:02:29.889: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049592039s
Jan 28 12:02:31.914: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074867991s
Jan 28 12:02:33.947: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107932132s
Jan 28 12:02:35.974: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13455489s
Jan 28 12:02:37.999: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.160023303s
STEP: Saw pod success
Jan 28 12:02:38.000: INFO: Pod "pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:02:38.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 28 12:02:39.222: INFO: Waiting for pod pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:02:39.240: INFO: Pod pod-configmaps-0dba852f-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:02:39.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cz54p" for this suite.
Jan 28 12:02:45.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:02:45.450: INFO: namespace: e2e-tests-configmap-cz54p, resource: bindings, ignored listing per whitelist
Jan 28 12:02:45.586: INFO: namespace e2e-tests-configmap-cz54p deletion completed in 6.321462865s

• [SLOW TEST:18.065 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
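The "as non-root" variant above adds a non-zero runAsUser to the pod's securityContext before consuming the ConfigMap volume. An illustrative sketch (resource names are placeholders, not the suite's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000        # non-root UID; the point of this test variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example   # created in the "Creating configMap" STEP
```

The non-root user must still be able to read the projected files, which is what the log verification after "Saw pod success" confirms.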
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:02:45.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 28 12:02:45.822: INFO: Waiting up to 5m0s for pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-var-expansion-rhh2d" to be "success or failure"
Jan 28 12:02:45.831: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.849771ms
Jan 28 12:02:47.991: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168044642s
Jan 28 12:02:50.103: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280090046s
Jan 28 12:02:52.839: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.016445369s
Jan 28 12:02:54.865: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042563074s
Jan 28 12:02:56.880: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.057469558s
STEP: Saw pod success
Jan 28 12:02:56.880: INFO: Pod "var-expansion-1878a628-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:02:56.884: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1878a628-41c6-11ea-a04a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 28 12:02:57.828: INFO: Waiting for pod var-expansion-1878a628-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:02:57.847: INFO: Pod var-expansion-1878a628-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:02:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-rhh2d" for this suite.
Jan 28 12:03:03.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:03:04.173: INFO: namespace: e2e-tests-var-expansion-rhh2d, resource: bindings, ignored listing per whitelist
Jan 28 12:03:04.197: INFO: namespace e2e-tests-var-expansion-rhh2d deletion completed in 6.337916249s

• [SLOW TEST:18.611 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
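Variable expansion in args uses the `$(VAR_NAME)` syntax, resolved from the container's own environment before the command runs. A minimal sketch of the kind of pod this test creates (variable name and value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    # $(TEST_VAR) is substituted by the kubelet, not by the shell
    args: ["echo $(TEST_VAR)"]
```

As elsewhere in the suite, the pod runs to completion and its log is checked for the substituted value.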
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:03:04.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 28 12:03:14.974: INFO: Successfully updated pod "annotationupdate23896c52-41c6-11ea-a04a-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:03:19.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4b784" for this suite.
Jan 28 12:03:43.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:03:43.421: INFO: namespace: e2e-tests-projected-4b784, resource: bindings, ignored listing per whitelist
Jan 28 12:03:43.455: INFO: namespace e2e-tests-projected-4b784 deletion completed in 24.254179962s

• [SLOW TEST:39.257 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
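This test exercises the live-update property of downward API projections: after the pod is running, the test patches the pod's annotations ("Successfully updated pod" above) and expects the kubelet to rewrite the projected file. A sketch of such a pod, assuming a busybox loop to observe the file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative
  annotations:
    build: one                     # value the test later patches
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The refresh is eventually consistent (it happens on the kubelet's sync period), which is why the test polls rather than asserting immediately after the update.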
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:03:43.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005
Jan 28 12:03:43.698: INFO: Pod name my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005: Found 0 pods out of 1
Jan 28 12:03:48.889: INFO: Pod name my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005: Found 1 pods out of 1
Jan 28 12:03:48.889: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005" are running
Jan 28 12:03:52.915: INFO: Pod "my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005-mrk4l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 12:03:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 12:03:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 12:03:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 12:03:43 +0000 UTC Reason: Message:}])
Jan 28 12:03:52.915: INFO: Trying to dial the pod
Jan 28 12:03:57.956: INFO: Controller my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005: Got expected result from replica 1 [my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005-mrk4l]: "my-hostname-basic-3af4e4e3-41c6-11ea-a04a-0242ac110005-mrk4l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:03:57.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-pzvsd" for this suite.
Jan 28 12:04:06.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:04:06.062: INFO: namespace: e2e-tests-replication-controller-pzvsd, resource: bindings, ignored listing per whitelist
Jan 28 12:04:06.174: INFO: namespace e2e-tests-replication-controller-pzvsd deletion completed in 8.209005791s

• [SLOW TEST:22.718 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
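The ReplicationController test creates an RC with one replica of an image that serves its own hostname over HTTP, then dials each replica and checks the response matches the pod name. An illustrative manifest (the image reference is an assumption based on the e2e test-images convention of this era, not taken from the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376
```

The "Got expected result from replica 1" line is the dial-and-compare step succeeding for the single required replica.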
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:04:06.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-495b21f7-41c6-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:04:07.989: INFO: Waiting up to 5m0s for pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-h5qbl" to be "success or failure"
Jan 28 12:04:08.000: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.194394ms
Jan 28 12:04:10.333: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344147558s
Jan 28 12:04:12.364: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374392639s
Jan 28 12:04:14.592: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603027939s
Jan 28 12:04:16.612: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622628552s
Jan 28 12:04:18.691: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.702256065s
STEP: Saw pod success
Jan 28 12:04:18.692: INFO: Pod "pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:04:18.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 28 12:04:18.832: INFO: Waiting for pod pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:04:18.842: INFO: Pod pod-secrets-495ccda3-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:04:18.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-h5qbl" for this suite.
Jan 28 12:04:25.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:04:25.119: INFO: namespace: e2e-tests-secrets-h5qbl, resource: bindings, ignored listing per whitelist
Jan 28 12:04:25.155: INFO: namespace e2e-tests-secrets-h5qbl deletion completed in 6.299640014s

• [SLOW TEST:18.980 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
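The Secrets variant above combines three settings: a non-root runAsUser, an fsGroup so the volume is group-readable, and a restrictive defaultMode on the secret volume. A hedged sketch (UIDs, GID, and mode are illustrative of the pattern, not the suite's exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 1001      # group ownership applied to the volume
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # List the mount to expose the applied file mode, then read a key
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0440   # octal; group-readable so fsGroup access works
```

The container log is then checked for both the expected mode bits and the secret's content.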
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:04:25.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 28 12:04:25.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-xjjl6 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 28 12:04:37.986: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0128 12:04:36.470822    3189 log.go:172] (0xc000756210) (0xc0005b1720) Create stream\nI0128 12:04:36.471075    3189 log.go:172] (0xc000756210) (0xc0005b1720) Stream added, broadcasting: 1\nI0128 12:04:36.484304    3189 log.go:172] (0xc000756210) Reply frame received for 1\nI0128 12:04:36.484429    3189 log.go:172] (0xc000756210) (0xc000a9e640) Create stream\nI0128 12:04:36.484453    3189 log.go:172] (0xc000756210) (0xc000a9e640) Stream added, broadcasting: 3\nI0128 12:04:36.485782    3189 log.go:172] (0xc000756210) Reply frame received for 3\nI0128 12:04:36.485980    3189 log.go:172] (0xc000756210) (0xc0008e4000) Create stream\nI0128 12:04:36.485996    3189 log.go:172] (0xc000756210) (0xc0008e4000) Stream added, broadcasting: 5\nI0128 12:04:36.487846    3189 log.go:172] (0xc000756210) Reply frame received for 5\nI0128 12:04:36.487901    3189 log.go:172] (0xc000756210) (0xc0008e40a0) Create stream\nI0128 12:04:36.487927    3189 log.go:172] (0xc000756210) (0xc0008e40a0) Stream added, broadcasting: 7\nI0128 12:04:36.499745    3189 log.go:172] (0xc000756210) Reply frame received for 7\nI0128 12:04:36.500085    3189 log.go:172] (0xc000a9e640) (3) Writing data frame\nI0128 12:04:36.500281    3189 log.go:172] (0xc000a9e640) (3) Writing data frame\nI0128 12:04:36.503826    3189 log.go:172] (0xc000756210) Data frame received for 5\nI0128 12:04:36.503857    3189 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0128 12:04:36.503877    3189 log.go:172] (0xc0008e4000) (5) Data frame sent\nI0128 12:04:36.510154    3189 log.go:172] (0xc000756210) Data frame received for 5\nI0128 12:04:36.510203    3189 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0128 12:04:36.510223    3189 log.go:172] (0xc0008e4000) (5) Data frame 
sent\nI0128 12:04:37.916127    3189 log.go:172] (0xc000756210) (0xc000a9e640) Stream removed, broadcasting: 3\nI0128 12:04:37.916727    3189 log.go:172] (0xc000756210) Data frame received for 1\nI0128 12:04:37.916827    3189 log.go:172] (0xc000756210) (0xc0008e4000) Stream removed, broadcasting: 5\nI0128 12:04:37.916942    3189 log.go:172] (0xc0005b1720) (1) Data frame handling\nI0128 12:04:37.917002    3189 log.go:172] (0xc0005b1720) (1) Data frame sent\nI0128 12:04:37.917039    3189 log.go:172] (0xc000756210) (0xc0005b1720) Stream removed, broadcasting: 1\nI0128 12:04:37.918338    3189 log.go:172] (0xc000756210) (0xc0008e40a0) Stream removed, broadcasting: 7\nI0128 12:04:37.918498    3189 log.go:172] (0xc000756210) Go away received\nI0128 12:04:37.918913    3189 log.go:172] (0xc000756210) (0xc0005b1720) Stream removed, broadcasting: 1\nI0128 12:04:37.918974    3189 log.go:172] (0xc000756210) (0xc000a9e640) Stream removed, broadcasting: 3\nI0128 12:04:37.918991    3189 log.go:172] (0xc000756210) (0xc0008e4000) Stream removed, broadcasting: 5\nI0128 12:04:37.919010    3189 log.go:172] (0xc000756210) (0xc0008e40a0) Stream removed, broadcasting: 7\n"
Jan 28 12:04:37.986: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:04:40.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xjjl6" for this suite.
Jan 28 12:04:46.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:04:46.502: INFO: namespace: e2e-tests-kubectl-xjjl6, resource: bindings, ignored listing per whitelist
Jan 28 12:04:46.648: INFO: namespace e2e-tests-kubectl-xjjl6 deletion completed in 6.418137411s

• [SLOW TEST:21.493 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
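The kubectl invocation in this test (`kubectl run --rm --generator=job/v1 --attach --stdin`) creates a Job, attaches stdin, and deletes the Job on exit; the stderr captured above even notes that the job/v1 generator is deprecated. The Job object that command generates looks roughly like this (a sketch, not the exact server-side object):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true    # --stdin: the test writes "abcd1234" and closes the stream
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

The stdout assertion ("abcd1234stdin closed") confirms the attached stdin round-trip, and the trailing `job.batch "e2e-test-rm-busybox-job" deleted` confirms the `--rm` cleanup that the final STEP verifies.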
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:04:46.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 28 12:04:46.943: INFO: Waiting up to 5m0s for pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-containers-df8pd" to be "success or failure"
Jan 28 12:04:46.955: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.227041ms
Jan 28 12:04:49.083: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140634389s
Jan 28 12:04:51.105: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162407826s
Jan 28 12:04:53.393: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450099507s
Jan 28 12:04:55.450: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506837623s
Jan 28 12:04:57.465: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.521807453s
STEP: Saw pod success
Jan 28 12:04:57.465: INFO: Pod "client-containers-60a95769-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:04:57.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-60a95769-41c6-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:04:57.628: INFO: Waiting for pod client-containers-60a95769-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:04:57.638: INFO: Pod client-containers-60a95769-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:04:57.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-df8pd" for this suite.
Jan 28 12:05:03.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:05:03.887: INFO: namespace: e2e-tests-containers-df8pd, resource: bindings, ignored listing per whitelist
Jan 28 12:05:04.133: INFO: namespace e2e-tests-containers-df8pd deletion completed in 6.478492639s

• [SLOW TEST:17.485 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
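Overriding the image's default command maps onto the pod spec's `command` field, which replaces the image's ENTRYPOINT (while `args` would replace its CMD). A minimal sketch of the idea, using busybox rather than the suite's dedicated entrypoint-tester image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # command replaces the image ENTRYPOINT entirely; the image's CMD is ignored
    # unless args is also unset, in which case no CMD arguments are appended here.
    command: ["echo", "entrypoint overridden"]
```

The test then reads the container log to verify the overridden command, not the image default, actually ran.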
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:05:04.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-6b07e6ec-41c6-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:05:04.339: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-d9pl9" to be "success or failure"
Jan 28 12:05:04.412: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.692608ms
Jan 28 12:05:06.432: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092786773s
Jan 28 12:05:08.478: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138394364s
Jan 28 12:05:10.731: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391182934s
Jan 28 12:05:12.745: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405876642s
Jan 28 12:05:14.765: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425867444s
STEP: Saw pod success
Jan 28 12:05:14.766: INFO: Pod "pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:05:14.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 28 12:05:14.906: INFO: Waiting for pod pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:05:14.915: INFO: Pod pod-projected-secrets-6b0895ed-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:05:14.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d9pl9" for this suite.
Jan 28 12:05:21.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:05:21.281: INFO: namespace: e2e-tests-projected-d9pl9, resource: bindings, ignored listing per whitelist
Jan 28 12:05:21.323: INFO: namespace e2e-tests-projected-d9pl9 deletion completed in 6.293321279s

• [SLOW TEST:17.188 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
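"Consumable in multiple volumes" means the same projected secret is mounted at two paths in one pod and both mounts must serve the same data. An illustrative sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # Read the same key through both mount points
    command: ["sh", "-c", "cat /etc/projected-secret-1/data-1 /etc/projected-secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-example   # same secret, second mount
```

Both reads must return the secret's value for the "success or failure" condition to be satisfied.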
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:05:21.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 28 12:05:21.546: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-rhbmb" to be "success or failure"
Jan 28 12:05:21.650: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 103.079685ms
Jan 28 12:05:23.671: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124001935s
Jan 28 12:05:25.687: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140425009s
Jan 28 12:05:28.064: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516906811s
Jan 28 12:05:30.089: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542170088s
Jan 28 12:05:32.104: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.55730208s
Jan 28 12:05:34.207: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.660112259s
Jan 28 12:05:36.226: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.679667339s
STEP: Saw pod success
Jan 28 12:05:36.227: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 28 12:05:36.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 28 12:05:37.083: INFO: Waiting for pod pod-host-path-test to disappear
Jan 28 12:05:37.110: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:05:37.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-rhbmb" for this suite.
Jan 28 12:05:43.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:05:43.454: INFO: namespace: e2e-tests-hostpath-rhbmb, resource: bindings, ignored listing per whitelist
Jan 28 12:05:43.523: INFO: namespace e2e-tests-hostpath-rhbmb deletion completed in 6.399027634s

• [SLOW TEST:22.199 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
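The HostPath test above polls the pod's phase roughly every two seconds until it reaches a terminal phase or the 5m0s timeout expires ("Waiting up to 5m0s for pod ... to be \"success or failure\""). A minimal sketch of that wait loop in Python (the real framework is Go; the function name and injectable `get_phase`/`clock`/`sleep` parameters are illustrative, with `get_phase` standing in for an API server Get call):

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    get_phase is a callable returning the current pod phase string
    ("Pending", "Running", "Succeeded", "Failed"), standing in for a
    pod Get against the API server.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

The log's Pending/Pending/.../Running/Succeeded progression is exactly what each iteration of such a loop observes.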
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:05:43.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xrdrm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 12:05:43.804: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 12:06:18.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xrdrm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 12:06:18.117: INFO: >>> kubeConfig: /root/.kube/config
I0128 12:06:18.234039       8 log.go:172] (0xc0023302c0) (0xc000a94dc0) Create stream
I0128 12:06:18.234180       8 log.go:172] (0xc0023302c0) (0xc000a94dc0) Stream added, broadcasting: 1
I0128 12:06:18.248666       8 log.go:172] (0xc0023302c0) Reply frame received for 1
I0128 12:06:18.248756       8 log.go:172] (0xc0023302c0) (0xc00182ff40) Create stream
I0128 12:06:18.248793       8 log.go:172] (0xc0023302c0) (0xc00182ff40) Stream added, broadcasting: 3
I0128 12:06:18.251123       8 log.go:172] (0xc0023302c0) Reply frame received for 3
I0128 12:06:18.251168       8 log.go:172] (0xc0023302c0) (0xc000a94e60) Create stream
I0128 12:06:18.251189       8 log.go:172] (0xc0023302c0) (0xc000a94e60) Stream added, broadcasting: 5
I0128 12:06:18.253792       8 log.go:172] (0xc0023302c0) Reply frame received for 5
I0128 12:06:18.397312       8 log.go:172] (0xc0023302c0) Data frame received for 3
I0128 12:06:18.397397       8 log.go:172] (0xc00182ff40) (3) Data frame handling
I0128 12:06:18.397424       8 log.go:172] (0xc00182ff40) (3) Data frame sent
I0128 12:06:18.594814       8 log.go:172] (0xc0023302c0) Data frame received for 1
I0128 12:06:18.594988       8 log.go:172] (0xc000a94dc0) (1) Data frame handling
I0128 12:06:18.595034       8 log.go:172] (0xc000a94dc0) (1) Data frame sent
I0128 12:06:18.595087       8 log.go:172] (0xc0023302c0) (0xc000a94dc0) Stream removed, broadcasting: 1
I0128 12:06:18.597694       8 log.go:172] (0xc0023302c0) (0xc00182ff40) Stream removed, broadcasting: 3
I0128 12:06:18.597783       8 log.go:172] (0xc0023302c0) (0xc000a94e60) Stream removed, broadcasting: 5
I0128 12:06:18.597938       8 log.go:172] (0xc0023302c0) (0xc000a94dc0) Stream removed, broadcasting: 1
I0128 12:06:18.597958       8 log.go:172] (0xc0023302c0) (0xc00182ff40) Stream removed, broadcasting: 3
I0128 12:06:18.597982       8 log.go:172] (0xc0023302c0) Go away received
I0128 12:06:18.598036       8 log.go:172] (0xc0023302c0) (0xc000a94e60) Stream removed, broadcasting: 5
Jan 28 12:06:18.598: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:06:18.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xrdrm" for this suite.
Jan 28 12:06:42.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:06:42.834: INFO: namespace: e2e-tests-pod-network-test-xrdrm, resource: bindings, ignored listing per whitelist
Jan 28 12:06:42.854: INFO: namespace e2e-tests-pod-network-test-xrdrm deletion completed in 24.2261612s

• [SLOW TEST:59.330 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
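The node-pod communication check above execs a curl pipeline inside the host-exec pod to fetch the netserver pod's hostname over HTTP (see the ExecWithOptions line). A small helper that assembles that command string, mirroring the flags shown in the log (the helper itself is illustrative, not part of the framework):

```python
def build_hostname_probe(pod_ip, port=8080, max_time=15, connect_timeout=1):
    """Build the shell command the networking test runs inside the
    host-exec pod: fetch /hostName from the target pod and drop
    blank lines, so a non-empty result means the endpoint answered."""
    url = f"http://{pod_ip}:{port}/hostName"
    return (
        f"curl -g -q -s --max-time {max_time} --connect-timeout {connect_timeout} "
        f"{url} | grep -v '^\\s*$'"
    )
```

For the pod in this run, `build_hostname_probe("10.32.0.4")` reproduces the command in the ExecWithOptions log line.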
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:06:42.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mnd8t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.249_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-mnd8t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-mnd8t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-mnd8t.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mnd8t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.98.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.98.249_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 12:06:59.343: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.360: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.375: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.392: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.403: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.416: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.424: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.437: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.444: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.451: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.458: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.468: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.475: INFO: Unable to read 10.110.98.249_udp@PTR from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.485: INFO: Unable to read 10.110.98.249_tcp@PTR from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.540: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.547: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.552: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.558: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.562: INFO: Unable to read 10.110.98.249_udp@PTR from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.567: INFO: Unable to read 10.110.98.249_tcp@PTR from pod e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005)
Jan 28 12:06:59.567: INFO: Lookups using e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t wheezy_udp@dns-test-service.e2e-tests-dns-mnd8t.svc wheezy_tcp@dns-test-service.e2e-tests-dns-mnd8t.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-mnd8t.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.98.249_udp@PTR 10.110.98.249_tcp@PTR jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-mnd8t.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.98.249_udp@PTR 10.110.98.249_tcp@PTR]

Jan 28 12:07:04.930: INFO: DNS probes using e2e-tests-dns-mnd8t/dns-test-a5fb9a96-41c6-11ea-a04a-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:07:05.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-mnd8t" for this suite.
Jan 28 12:07:12.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:07:12.882: INFO: namespace: e2e-tests-dns-mnd8t, resource: bindings, ignored listing per whitelist
Jan 28 12:07:12.915: INFO: namespace e2e-tests-dns-mnd8t deletion completed in 6.400039723s

• [SLOW TEST:30.061 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
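The DNS probe above runs one dig lookup per (protocol, name) pair and writes an `OK` marker file for each success; the test then reads those markers back and reports the ones still missing ("Lookups ... failed for: [...]"). A sketch of how the target list expands, following the wheezy/jessie command lines in the log (the enumeration below covers the A/SRV service names only, not the PodARecord or PTR checks):

```python
def dns_probe_targets(service, namespace):
    """Enumerate the lookup targets the DNS conformance probe checks,
    one marker per (protocol, name) pair, mirroring the shell loop in
    the log. Simplified: pod A-record and PTR targets are omitted."""
    names = [
        service,                                   # bare service name
        f"{service}.{namespace}",                  # service.namespace
        f"{service}.{namespace}.svc",              # service.namespace.svc
        f"_http._tcp.{service}.{namespace}.svc",   # SRV record
    ]
    return [f"{proto}@{name}" for proto in ("udp", "tcp") for name in names]
```

Each entry corresponds to a `/results/<image>_<proto>@<name>` marker file the probe pod writes once the matching dig query returns an answer.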
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:07:12.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 12:07:13.073: INFO: Waiting up to 5m0s for pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-4qjjl" to be "success or failure"
Jan 28 12:07:13.085: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.993704ms
Jan 28 12:07:15.337: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263603876s
Jan 28 12:07:17.347: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273718623s
Jan 28 12:07:19.658: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584878511s
Jan 28 12:07:21.671: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.59750296s
Jan 28 12:07:23.683: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.609676671s
STEP: Saw pod success
Jan 28 12:07:23.683: INFO: Pod "pod-b7c3f33d-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:07:23.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b7c3f33d-41c6-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:07:24.640: INFO: Waiting for pod pod-b7c3f33d-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:07:24.680: INFO: Pod pod-b7c3f33d-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:07:24.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4qjjl" for this suite.
Jan 28 12:07:30.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:07:31.063: INFO: namespace: e2e-tests-emptydir-4qjjl, resource: bindings, ignored listing per whitelist
Jan 28 12:07:31.116: INFO: namespace e2e-tests-emptydir-4qjjl deletion completed in 6.418365508s

• [SLOW TEST:18.201 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
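The emptyDir test above mounts a tmpfs-backed volume with mode 0777 and has the test container verify the permission bits on the mount point as a non-root user. The core of that check is just reading the mode bits from a stat call; a self-contained Python sketch (the real test inspects the mount inside the pod, here we use a scratch directory):

```python
import os
import stat
import tempfile

def mode_string(path):
    """Return the permission bits of path as an octal string, the kind
    of check the emptyDir mode test performs on the volume mount."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

# Stand-in for the tmpfs mount: a scratch directory given the 0777
# mode the (non-root,0777,tmpfs) variant expects. chmod is not
# affected by the process umask, unlike the mode passed to mkdir.
scratch = tempfile.mkdtemp()
os.chmod(scratch, 0o777)
```

With the directory prepared this way, `mode_string(scratch)` reports `0o777`, matching what the test container asserts.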
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:07:31.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0128 12:07:34.803171       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 12:07:34.803: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:07:34.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vnc2w" for this suite.
Jan 28 12:07:41.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:07:41.251: INFO: namespace: e2e-tests-gc-vnc2w, resource: bindings, ignored listing per whitelist
Jan 28 12:07:41.257: INFO: namespace e2e-tests-gc-vnc2w deletion completed in 6.442542586s

• [SLOW TEST:10.141 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
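The garbage collector test above deletes the Deployment without orphaning and then polls until the owned ReplicaSets and Pods are gone; the "expected 0 rs, got 1 rs" lines are failed iterations of that poll. A compact sketch of the retry loop (the callable and parameter names are illustrative; `count_remaining` stands in for a List call against the API server):

```python
def wait_for_garbage_collection(count_remaining, attempts=5,
                                interval=1.0, sleep=lambda s: None):
    """Poll count_remaining() until it reports zero owned objects,
    mirroring the 'expected 0 rs, got 1 rs' retries in the log.
    Returns True once everything is collected, False on exhaustion."""
    for _ in range(attempts):
        if count_remaining() == 0:
            return True
        sleep(interval)
    return False
```

Cascading deletion is asynchronous, which is why the test tolerates a few non-zero observations before the counts settle at zero.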
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:07:41.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:07:41.582: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 28 12:07:47.317: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 12:07:51.344: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 28 12:07:51.395: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-s26rl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s26rl/deployments/test-cleanup-deployment,UID:ce976d9e-41c6-11ea-a994-fa163e34d433,ResourceVersion:19743779,Generation:1,CreationTimestamp:2020-01-28 12:07:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 28 12:07:51.402: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:07:51.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-s26rl" for this suite.
Jan 28 12:07:59.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:07:59.700: INFO: namespace: e2e-tests-deployment-s26rl, resource: bindings, ignored listing per whitelist
Jan 28 12:07:59.755: INFO: namespace e2e-tests-deployment-s26rl deletion completed in 8.195480733s

• [SLOW TEST:18.497 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
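The Deployment cleanup test above sets `RevisionHistoryLimit:*0` (visible in the object dump), so the controller deletes old ReplicaSets as soon as they are superseded. A simplified model of that history pruning (the function operates only on the Deployment's old, non-active ReplicaSets, given here as hypothetical (name, revision) pairs):

```python
def prune_old_replica_sets(old_replica_sets, revision_history_limit):
    """Given a Deployment's *old* (non-active) ReplicaSets as
    (name, revision) pairs, return the names the controller would
    delete, keeping only the newest revision_history_limit of them.
    With limit 0 — as in the test — every old ReplicaSet is deleted."""
    ordered = sorted(old_replica_sets, key=lambda rs: rs[1], reverse=True)
    return [name for name, _ in ordered[revision_history_limit:]]
```

With a limit of 0 nothing survives pruning, which is what "Waiting for deployment test-cleanup-deployment history to be cleaned up" verifies.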
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:07:59.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 12:08:01.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:01.287: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 12:08:01.287: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 28 12:08:01.303: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 28 12:08:01.515: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 28 12:08:01.547: INFO: scanned /root for discovery docs: 
Jan 28 12:08:01.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:26.430: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 12:08:26.431: INFO: stdout: "Created e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089\nScaling up e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 28 12:08:26.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:26.658: INFO: stderr: ""
Jan 28 12:08:26.658: INFO: stdout: "e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x e2e-test-nginx-rc-pl2fq "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 12:08:31.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:31.897: INFO: stderr: ""
Jan 28 12:08:31.897: INFO: stdout: "e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x e2e-test-nginx-rc-pl2fq "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 12:08:36.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:37.121: INFO: stderr: ""
Jan 28 12:08:37.122: INFO: stdout: "e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x "
Jan 28 12:08:37.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:37.243: INFO: stderr: ""
Jan 28 12:08:37.243: INFO: stdout: "true"
Jan 28 12:08:37.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:37.351: INFO: stderr: ""
Jan 28 12:08:37.351: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 28 12:08:37.352: INFO: e2e-test-nginx-rc-b12a44dbba48e682b279eeb8453b7089-m796x is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 28 12:08:37.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dhrsn'
Jan 28 12:08:37.517: INFO: stderr: ""
Jan 28 12:08:37.517: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:08:37.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dhrsn" for this suite.
Jan 28 12:09:01.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:09:01.702: INFO: namespace: e2e-tests-kubectl-dhrsn, resource: bindings, ignored listing per whitelist
Jan 28 12:09:01.866: INFO: namespace e2e-tests-kubectl-dhrsn deletion completed in 24.309193285s

• [SLOW TEST:62.110 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
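The rolling-update test above first waits for the replication controller to "stabilize": the log lines report generation vs. observed generation and spec.replicas vs. status.replicas until they agree. A minimal sketch of that stabilization check, assuming hypothetical field names that mirror the log output (this is an illustration, not the e2e framework's actual Go code):

```python
# Sketch of the "Waiting for rc ... to stabilize" condition seen in the log.
# Parameter names mirror the fields printed in the log lines; hypothetical helper.

def rc_is_stable(generation, observed_generation, spec_replicas, status_replicas):
    """An RC is treated as stable once the controller has observed the
    latest generation and the status replica count matches the spec."""
    return observed_generation >= generation and status_replicas == spec_replicas

# Matching the two log lines:
#   generation 1, observed 0, spec.replicas 1, status.replicas 0 -> not stable
#   generation 1, observed 1, spec.replicas 1, status.replicas 0 -> still not stable
```

Only once both conditions hold does the test proceed to the `kubectl rolling-update` step.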
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:09:01.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f8bf349f-41c6-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:09:02.273: INFO: Waiting up to 5m0s for pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-2swg8" to be "success or failure"
Jan 28 12:09:02.334: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.161846ms
Jan 28 12:09:04.347: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073369215s
Jan 28 12:09:06.359: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086166319s
Jan 28 12:09:08.477: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203671216s
Jan 28 12:09:10.936: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663113036s
Jan 28 12:09:12.950: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.677059967s
STEP: Saw pod success
Jan 28 12:09:12.950: INFO: Pod "pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:09:12.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 28 12:09:13.090: INFO: Waiting for pod pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005 to disappear
Jan 28 12:09:13.106: INFO: Pod pod-secrets-f8d74578-41c6-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:09:13.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2swg8" for this suite.
Jan 28 12:09:19.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:09:19.218: INFO: namespace: e2e-tests-secrets-2swg8, resource: bindings, ignored listing per whitelist
Jan 28 12:09:19.252: INFO: namespace e2e-tests-secrets-2swg8 deletion completed in 6.134612872s
STEP: Destroying namespace "e2e-tests-secret-namespace-r4c65" for this suite.
Jan 28 12:09:25.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:09:25.458: INFO: namespace: e2e-tests-secret-namespace-r4c65, resource: bindings, ignored listing per whitelist
Jan 28 12:09:25.587: INFO: namespace e2e-tests-secret-namespace-r4c65 deletion completed in 6.334904377s

• [SLOW TEST:23.721 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
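The `Phase="Pending" ... Elapsed: ...` lines above come from a fixed-interval poll: the framework re-reads the pod phase roughly every two seconds until it reaches the "success or failure" condition or the 5m0s timeout expires. A rough Python sketch of that loop, with a hypothetical `wait_for_pod_condition` helper standing in for the framework's Go polling code:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed', printing
    elapsed time in the style of the e2e log. Returns the terminal phase,
    or raises TimeoutError once `timeout` seconds have elapsed."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

The same pattern explains the later "Waiting for pod ... to disappear / still exists" sequences, which poll for pod deletion instead of a phase.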
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:09:25.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 28 12:09:36.618: INFO: Successfully updated pod "labelsupdate06e32c42-41c7-11ea-a04a-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:09:38.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z5lrs" for this suite.
Jan 28 12:10:02.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:10:03.032: INFO: namespace: e2e-tests-projected-z5lrs, resource: bindings, ignored listing per whitelist
Jan 28 12:10:03.062: INFO: namespace e2e-tests-projected-z5lrs deletion completed in 24.263911277s

• [SLOW TEST:37.475 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:10:03.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1d34be13-41c7-11ea-a04a-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:10:15.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gxlhr" for this suite.
Jan 28 12:10:39.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:10:39.588: INFO: namespace: e2e-tests-configmap-gxlhr, resource: bindings, ignored listing per whitelist
Jan 28 12:10:39.746: INFO: namespace e2e-tests-configmap-gxlhr deletion completed in 24.293037167s

• [SLOW TEST:36.684 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:10:39.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-33185aec-41c7-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:10:40.000: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-dp9b2" to be "success or failure"
Jan 28 12:10:40.007: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695753ms
Jan 28 12:10:42.028: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02772719s
Jan 28 12:10:44.049: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048742233s
Jan 28 12:10:46.067: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066718863s
Jan 28 12:10:48.080: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079928535s
Jan 28 12:10:50.094: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093843297s
STEP: Saw pod success
Jan 28 12:10:50.094: INFO: Pod "pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:10:50.100: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 12:10:50.961: INFO: Waiting for pod pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005 to disappear
Jan 28 12:10:50.973: INFO: Pod pod-projected-secrets-33191a93-41c7-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:10:50.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dp9b2" for this suite.
Jan 28 12:10:57.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:10:57.179: INFO: namespace: e2e-tests-projected-dp9b2, resource: bindings, ignored listing per whitelist
Jan 28 12:10:57.518: INFO: namespace e2e-tests-projected-dp9b2 deletion completed in 6.514353416s

• [SLOW TEST:17.772 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:10:57.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-3dcba40a-41c7-11ea-a04a-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-3dcba3a0-41c7-11ea-a04a-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 28 12:10:58.083: INFO: Waiting up to 5m0s for pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-r7l69" to be "success or failure"
Jan 28 12:10:58.102: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092069ms
Jan 28 12:11:00.137: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054050763s
Jan 28 12:11:02.164: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081308719s
Jan 28 12:11:04.926: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.843322447s
Jan 28 12:11:06.946: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863019457s
Jan 28 12:11:08.964: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.880605742s
STEP: Saw pod success
Jan 28 12:11:08.964: INFO: Pod "projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:11:08.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan 28 12:11:09.132: INFO: Waiting for pod projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005 to disappear
Jan 28 12:11:09.145: INFO: Pod projected-volume-3dcba09a-41c7-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:11:09.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r7l69" for this suite.
Jan 28 12:11:15.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:11:15.246: INFO: namespace: e2e-tests-projected-r7l69, resource: bindings, ignored listing per whitelist
Jan 28 12:11:15.468: INFO: namespace e2e-tests-projected-r7l69 deletion completed in 6.315046933s

• [SLOW TEST:17.950 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:11:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-485a47cf-41c7-11ea-a04a-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-485a47cf-41c7-11ea-a04a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:12:57.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j7dh6" for this suite.
Jan 28 12:13:21.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:13:21.279: INFO: namespace: e2e-tests-projected-j7dh6, resource: bindings, ignored listing per whitelist
Jan 28 12:13:21.384: INFO: namespace e2e-tests-projected-j7dh6 deletion completed in 24.284316487s

• [SLOW TEST:125.915 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:13:21.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 12:16:25.302: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:25.338: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:27.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:27.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:29.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:29.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:31.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:31.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:33.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:33.354: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:35.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:35.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:37.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:37.367: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:39.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:39.355: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:41.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:41.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:43.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:43.378: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:45.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:45.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:47.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:47.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:49.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:49.361: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:51.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:51.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:53.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:53.362: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:55.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:55.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:57.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:57.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:16:59.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:16:59.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:01.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:01.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:03.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:03.353: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:05.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:05.353: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:07.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:07.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:09.340: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:09.377: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:11.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:11.376: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:13.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:13.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:15.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:15.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:17.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:17.361: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:19.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:19.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:21.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:21.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:23.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:23.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:25.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:25.630: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:27.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:27.360: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:29.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:29.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:31.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:31.378: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:33.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:33.362: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:35.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:35.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:37.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:37.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:39.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:39.435: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:41.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:41.360: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:43.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:43.360: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:45.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:45.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:47.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:47.354: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:49.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:49.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:51.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:51.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:53.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:53.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:55.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:55.361: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:57.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:57.362: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:17:59.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:17:59.364: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:01.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:01.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:03.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:03.416: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:05.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:05.354: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:07.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:07.358: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:09.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:09.366: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:11.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:11.359: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 12:18:13.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 12:18:13.363: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:18:13.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-j5pqz" for this suite.
Jan 28 12:18:39.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:18:39.673: INFO: namespace: e2e-tests-container-lifecycle-hook-j5pqz, resource: bindings, ignored listing per whitelist
Jan 28 12:18:39.679: INFO: namespace e2e-tests-container-lifecycle-hook-j5pqz deletion completed in 26.300362187s

• [SLOW TEST:318.295 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
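The deletion wait logged above polls on a fixed 2-second interval until the pod is gone or a timeout expires. A minimal sketch of that pattern (the `pod_exists` callback and function name are hypothetical, not the framework's actual helpers):

```python
import time

def wait_for_pod_to_disappear(pod_exists, interval=2.0, timeout=300.0,
                              now=time.monotonic, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds until it returns False
    or `timeout` seconds elapse. Returns True if the pod disappeared."""
    deadline = now() + timeout
    while now() < deadline:
        if not pod_exists():
            return True  # corresponds to "no longer exists" in the log
        sleep(interval)  # corresponds to the 2s gaps between log lines
    return False
```

Each "still exists" line in the log corresponds to one iteration where `pod_exists()` returned true.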
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:18:39.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:19:39.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c2wcf" for this suite.
Jan 28 12:20:04.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:20:04.093: INFO: namespace: e2e-tests-container-probe-c2wcf, resource: bindings, ignored listing per whitelist
Jan 28 12:20:04.184: INFO: namespace e2e-tests-container-probe-c2wcf deletion completed in 24.23393255s

• [SLOW TEST:84.505 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
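This probe test watches the pod for a full window and requires that the container never becomes Ready and is never restarted. A hedged sketch of that invariant check, assuming observations are sampled as `(ready, restart_count)` pairs (the function name is illustrative, not the suite's code):

```python
def never_ready_never_restarted(samples):
    """samples: iterable of (ready: bool, restart_count: int) observations
    taken over the watch window. Returns True only if the container was
    never Ready and its restart count stayed at zero throughout."""
    return all((not ready) and restarts == 0 for ready, restarts in samples)
```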
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:20:04.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:20:14.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4kqfh" for this suite.
Jan 28 12:20:56.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:20:56.743: INFO: namespace: e2e-tests-kubelet-test-4kqfh, resource: bindings, ignored listing per whitelist
Jan 28 12:20:56.832: INFO: namespace e2e-tests-kubelet-test-4kqfh deletion completed in 42.35269283s

• [SLOW TEST:52.648 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
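A rough reconstruction of the kind of pod manifest this test exercises, expressed as a Python dict: a busybox container with `securityContext.readOnlyRootFilesystem: true`, so a write to the root filesystem is expected to fail. The exact image, command, and names are assumptions, not the suite's actual spec:

```python
def read_only_busybox_pod(name="busybox-readonly"):
    """Build a pod manifest whose container root filesystem is read-only;
    the shell write to /file is expected to fail at runtime."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "busybox",
                "image": "busybox",
                "command": ["/bin/sh", "-c", "echo test > /file; sleep 240"],
                "securityContext": {"readOnlyRootFilesystem": True},
            }],
            "restartPolicy": "Never",
        },
    }
```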
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:20:56.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vkd8l
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 12:20:57.094: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 12:21:35.556: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-vkd8l PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 12:21:35.556: INFO: >>> kubeConfig: /root/.kube/config
I0128 12:21:35.652812       8 log.go:172] (0xc0006dfe40) (0xc000a95220) Create stream
I0128 12:21:35.653042       8 log.go:172] (0xc0006dfe40) (0xc000a95220) Stream added, broadcasting: 1
I0128 12:21:35.660230       8 log.go:172] (0xc0006dfe40) Reply frame received for 1
I0128 12:21:35.660273       8 log.go:172] (0xc0006dfe40) (0xc000a952c0) Create stream
I0128 12:21:35.660290       8 log.go:172] (0xc0006dfe40) (0xc000a952c0) Stream added, broadcasting: 3
I0128 12:21:35.661409       8 log.go:172] (0xc0006dfe40) Reply frame received for 3
I0128 12:21:35.661437       8 log.go:172] (0xc0006dfe40) (0xc000a95360) Create stream
I0128 12:21:35.661449       8 log.go:172] (0xc0006dfe40) (0xc000a95360) Stream added, broadcasting: 5
I0128 12:21:35.662689       8 log.go:172] (0xc0006dfe40) Reply frame received for 5
I0128 12:21:36.879014       8 log.go:172] (0xc0006dfe40) Data frame received for 3
I0128 12:21:36.879228       8 log.go:172] (0xc000a952c0) (3) Data frame handling
I0128 12:21:36.879293       8 log.go:172] (0xc000a952c0) (3) Data frame sent
I0128 12:21:37.075088       8 log.go:172] (0xc0006dfe40) (0xc000a952c0) Stream removed, broadcasting: 3
I0128 12:21:37.075441       8 log.go:172] (0xc0006dfe40) Data frame received for 1
I0128 12:21:37.075499       8 log.go:172] (0xc000a95220) (1) Data frame handling
I0128 12:21:37.075550       8 log.go:172] (0xc000a95220) (1) Data frame sent
I0128 12:21:37.075576       8 log.go:172] (0xc0006dfe40) (0xc000a95220) Stream removed, broadcasting: 1
I0128 12:21:37.075893       8 log.go:172] (0xc0006dfe40) (0xc000a95360) Stream removed, broadcasting: 5
I0128 12:21:37.076046       8 log.go:172] (0xc0006dfe40) Go away received
I0128 12:21:37.076313       8 log.go:172] (0xc0006dfe40) (0xc000a95220) Stream removed, broadcasting: 1
I0128 12:21:37.076365       8 log.go:172] (0xc0006dfe40) (0xc000a952c0) Stream removed, broadcasting: 3
I0128 12:21:37.076383       8 log.go:172] (0xc0006dfe40) (0xc000a95360) Stream removed, broadcasting: 5
Jan 28 12:21:37.076: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:21:37.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vkd8l" for this suite.
Jan 28 12:22:01.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:22:01.209: INFO: namespace: e2e-tests-pod-network-test-vkd8l, resource: bindings, ignored listing per whitelist
Jan 28 12:22:01.284: INFO: namespace e2e-tests-pod-network-test-vkd8l deletion completed in 24.18651779s

• [SLOW TEST:64.451 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
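The `ExecWithOptions` line above shows the actual check: the host-exec pod sends the literal string `hostName` over UDP to the netserver pod (`nc -w 1 -u 10.32.0.4 8081`) and expects the pod's name back. A self-contained model of that round trip, with both ends on loopback instead of pod IPs (the real test's server is the netserver container, not this sketch):

```python
import socket
import threading

def udp_hostname_roundtrip(reply="netserver-0", timeout=2.0):
    """Send 'hostName' over UDP and read the responder's name back,
    mirroring the `echo 'hostName' | nc -u <pod-ip> 8081` check."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    port = server.getsockname()[1]

    def serve_once():
        data, addr = server.recvfrom(1024)
        if data.strip() == b"hostName":
            server.sendto(reply.encode(), addr)

    t = threading.Thread(target=serve_once, daemon=True)
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(timeout)
    client.sendto(b"hostName", ("127.0.0.1", port))
    data, _ = client.recvfrom(1024)
    t.join(timeout)
    client.close()
    server.close()
    return data.decode()
```

The "Found all expected endpoints: [netserver-0]" line is the real test's equivalent of the reply matching the expected pod name.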
------------------------------
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:22:01.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:22:01.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-mpk9v" for this suite.
Jan 28 12:22:07.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:22:07.630: INFO: namespace: e2e-tests-services-mpk9v, resource: bindings, ignored listing per whitelist
Jan 28 12:22:07.723: INFO: namespace e2e-tests-services-mpk9v deletion completed in 6.243287252s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.438 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:22:07.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:22:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-s8x2f" for this suite.
Jan 28 12:22:24.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:22:24.603: INFO: namespace: e2e-tests-emptydir-wrapper-s8x2f, resource: bindings, ignored listing per whitelist
Jan 28 12:22:24.687: INFO: namespace e2e-tests-emptydir-wrapper-s8x2f deletion completed in 6.37074439s

• [SLOW TEST:16.963 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:22:24.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 28 12:22:24.910: INFO: Waiting up to 5m0s for pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-x64sm" to be "success or failure"
Jan 28 12:22:25.065: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.572276ms
Jan 28 12:22:27.082: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172234881s
Jan 28 12:22:29.105: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19478476s
Jan 28 12:22:31.134: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223753514s
Jan 28 12:22:33.145: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.234422989s
Jan 28 12:22:35.199: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.289210152s
STEP: Saw pod success
Jan 28 12:22:35.200: INFO: Pod "downward-api-d7418c07-41c8-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:22:35.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d7418c07-41c8-11ea-a04a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 28 12:22:35.911: INFO: Waiting for pod downward-api-d7418c07-41c8-11ea-a04a-0242ac110005 to disappear
Jan 28 12:22:36.159: INFO: Pod downward-api-d7418c07-41c8-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:22:36.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x64sm" for this suite.
Jan 28 12:22:42.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:22:42.324: INFO: namespace: e2e-tests-downward-api-x64sm, resource: bindings, ignored listing per whitelist
Jan 28 12:22:42.393: INFO: namespace e2e-tests-downward-api-x64sm deletion completed in 6.221252707s

• [SLOW TEST:17.706 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
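This test relies on the Downward API's `resourceFieldRef`, which exposes a container's own resource limits and requests as environment variables. A sketch of the `env` section such a pod would carry, built as a Python list of dicts (variable names and the container name are illustrative assumptions):

```python
def downward_api_env(container_name="dapi-container"):
    """Build env entries that surface the container's limits/requests
    via the Downward API's resourceFieldRef."""
    fields = {
        "CPU_LIMIT": "limits.cpu",
        "MEMORY_LIMIT": "limits.memory",
        "CPU_REQUEST": "requests.cpu",
        "MEMORY_REQUEST": "requests.memory",
    }
    return [
        {"name": name,
         "valueFrom": {"resourceFieldRef": {
             "containerName": container_name,
             "resource": resource}}}
        for name, resource in fields.items()
    ]
```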
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:22:42.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 12:22:42.962: INFO: Number of nodes with available pods: 0
Jan 28 12:22:42.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:44.524: INFO: Number of nodes with available pods: 0
Jan 28 12:22:44.524: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:44.997: INFO: Number of nodes with available pods: 0
Jan 28 12:22:44.998: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:46.003: INFO: Number of nodes with available pods: 0
Jan 28 12:22:46.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:46.991: INFO: Number of nodes with available pods: 0
Jan 28 12:22:46.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:48.583: INFO: Number of nodes with available pods: 0
Jan 28 12:22:48.583: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:49.034: INFO: Number of nodes with available pods: 0
Jan 28 12:22:49.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:49.997: INFO: Number of nodes with available pods: 0
Jan 28 12:22:49.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:51.036: INFO: Number of nodes with available pods: 0
Jan 28 12:22:51.036: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:22:52.101: INFO: Number of nodes with available pods: 1
Jan 28 12:22:52.101: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 28 12:22:52.174: INFO: Number of nodes with available pods: 1
Jan 28 12:22:52.174: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-w958f, will wait for the garbage collector to delete the pods
Jan 28 12:22:53.300: INFO: Deleting DaemonSet.extensions daemon-set took: 14.115125ms
Jan 28 12:22:55.301: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.000862424s
Jan 28 12:23:00.345: INFO: Number of nodes with available pods: 0
Jan 28 12:23:00.346: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 12:23:00.356: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-w958f/daemonsets","resourceVersion":"19745382"},"items":null}

Jan 28 12:23:00.362: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-w958f/pods","resourceVersion":"19745382"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:23:00.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-w958f" for this suite.
Jan 28 12:23:06.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:23:06.578: INFO: namespace: e2e-tests-daemonsets-w958f, resource: bindings, ignored listing per whitelist
Jan 28 12:23:06.644: INFO: namespace e2e-tests-daemonsets-w958f deletion completed in 6.259908166s

• [SLOW TEST:24.251 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
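The repeated "Number of nodes with available pods" lines above come from counting, on each poll, the nodes that host at least one Running, Ready daemon pod. A simplified model of that count (the input shape is an assumption; the real framework inspects full `v1.Pod` objects):

```python
def nodes_with_available_pods(pods):
    """pods: iterable of dicts like {"node": str, "phase": str, "ready": bool}.
    A node counts once it hosts at least one Running, Ready daemon pod."""
    return len({p["node"] for p in pods
                if p["phase"] == "Running" and p["ready"]})
```

On this single-node cluster the count flips from 0 to 1 once the replacement daemon pod becomes Ready, which is the "revived" condition the test waits for.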
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:23:06.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 12:23:06.908: INFO: Number of nodes with available pods: 0
Jan 28 12:23:06.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:07.932: INFO: Number of nodes with available pods: 0
Jan 28 12:23:07.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:08.930: INFO: Number of nodes with available pods: 0
Jan 28 12:23:08.930: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:09.946: INFO: Number of nodes with available pods: 0
Jan 28 12:23:09.946: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:10.936: INFO: Number of nodes with available pods: 0
Jan 28 12:23:10.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:12.448: INFO: Number of nodes with available pods: 0
Jan 28 12:23:12.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:12.938: INFO: Number of nodes with available pods: 0
Jan 28 12:23:12.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:13.932: INFO: Number of nodes with available pods: 0
Jan 28 12:23:13.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:14.925: INFO: Number of nodes with available pods: 0
Jan 28 12:23:14.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:15.980: INFO: Number of nodes with available pods: 0
Jan 28 12:23:15.981: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:16.947: INFO: Number of nodes with available pods: 1
Jan 28 12:23:16.947: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 28 12:23:17.082: INFO: Number of nodes with available pods: 0
Jan 28 12:23:17.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:18.139: INFO: Number of nodes with available pods: 0
Jan 28 12:23:18.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:19.477: INFO: Number of nodes with available pods: 0
Jan 28 12:23:19.477: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:20.185: INFO: Number of nodes with available pods: 0
Jan 28 12:23:20.186: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:21.348: INFO: Number of nodes with available pods: 0
Jan 28 12:23:21.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:22.112: INFO: Number of nodes with available pods: 0
Jan 28 12:23:22.113: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:23.103: INFO: Number of nodes with available pods: 0
Jan 28 12:23:23.103: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:24.118: INFO: Number of nodes with available pods: 0
Jan 28 12:23:24.119: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:25.166: INFO: Number of nodes with available pods: 0
Jan 28 12:23:25.166: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:26.110: INFO: Number of nodes with available pods: 0
Jan 28 12:23:26.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:27.104: INFO: Number of nodes with available pods: 0
Jan 28 12:23:27.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:28.116: INFO: Number of nodes with available pods: 0
Jan 28 12:23:28.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:29.106: INFO: Number of nodes with available pods: 0
Jan 28 12:23:29.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:30.185: INFO: Number of nodes with available pods: 0
Jan 28 12:23:30.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:31.104: INFO: Number of nodes with available pods: 0
Jan 28 12:23:31.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:32.114: INFO: Number of nodes with available pods: 0
Jan 28 12:23:32.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:33.102: INFO: Number of nodes with available pods: 0
Jan 28 12:23:33.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:34.102: INFO: Number of nodes with available pods: 0
Jan 28 12:23:34.103: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:35.190: INFO: Number of nodes with available pods: 0
Jan 28 12:23:35.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:36.108: INFO: Number of nodes with available pods: 0
Jan 28 12:23:36.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:37.418: INFO: Number of nodes with available pods: 0
Jan 28 12:23:37.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:38.108: INFO: Number of nodes with available pods: 0
Jan 28 12:23:38.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:39.105: INFO: Number of nodes with available pods: 0
Jan 28 12:23:39.105: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:40.147: INFO: Number of nodes with available pods: 0
Jan 28 12:23:40.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:23:41.112: INFO: Number of nodes with available pods: 1
Jan 28 12:23:41.113: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-n66jx, will wait for the garbage collector to delete the pods
Jan 28 12:23:41.215: INFO: Deleting DaemonSet.extensions daemon-set took: 31.620019ms
Jan 28 12:23:41.316: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.206581ms
Jan 28 12:23:52.725: INFO: Number of nodes with available pods: 0
Jan 28 12:23:52.725: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 12:23:52.733: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-n66jx/daemonsets","resourceVersion":"19745500"},"items":null}

Jan 28 12:23:52.741: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-n66jx/pods","resourceVersion":"19745500"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:23:52.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-n66jx" for this suite.
Jan 28 12:24:00.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:24:00.963: INFO: namespace: e2e-tests-daemonsets-n66jx, resource: bindings, ignored listing per whitelist
Jan 28 12:24:01.017: INFO: namespace e2e-tests-daemonsets-n66jx deletion completed in 8.196853371s

• [SLOW TEST:54.371 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
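For reference, the "run and stop simple daemon" flow above can be reproduced with a minimal DaemonSet manifest along these lines. The name mirrors the log; the image and labels are assumptions, since the manifest the test applies is not shown in the output:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # hypothetical label; the test uses its own selector
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # illustrative image, not taken from the log
```

Creating this schedules one pod per eligible node (the single node here, hence "Number of running nodes: 1, number of available pods: 1"); deleting the DaemonSet lets the garbage collector remove the pods, which is the wait observed above.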
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:24:01.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-kkp5
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 12:24:01.219: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kkp5" in namespace "e2e-tests-subpath-v48cv" to be "success or failure"
Jan 28 12:24:01.227: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.620331ms
Jan 28 12:24:03.340: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121116376s
Jan 28 12:24:05.353: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134051543s
Jan 28 12:24:07.686: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46714953s
Jan 28 12:24:09.706: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4868391s
Jan 28 12:24:11.721: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.502107401s
Jan 28 12:24:13.739: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.519917293s
Jan 28 12:24:15.753: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.53357366s
Jan 28 12:24:17.773: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.554234056s
Jan 28 12:24:19.842: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 18.622334096s
Jan 28 12:24:21.876: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 20.6571685s
Jan 28 12:24:23.941: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 22.721521907s
Jan 28 12:24:25.959: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 24.739539083s
Jan 28 12:24:27.973: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 26.75346113s
Jan 28 12:24:29.988: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 28.768608165s
Jan 28 12:24:32.003: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 30.78403499s
Jan 28 12:24:34.029: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Running", Reason="", readiness=false. Elapsed: 32.810243709s
Jan 28 12:24:36.059: INFO: Pod "pod-subpath-test-secret-kkp5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.840285375s
STEP: Saw pod success
Jan 28 12:24:36.060: INFO: Pod "pod-subpath-test-secret-kkp5" satisfied condition "success or failure"
Jan 28 12:24:36.083: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-kkp5 container test-container-subpath-secret-kkp5: 
STEP: delete the pod
Jan 28 12:24:36.381: INFO: Waiting for pod pod-subpath-test-secret-kkp5 to disappear
Jan 28 12:24:36.412: INFO: Pod pod-subpath-test-secret-kkp5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-kkp5
Jan 28 12:24:36.412: INFO: Deleting pod "pod-subpath-test-secret-kkp5" in namespace "e2e-tests-subpath-v48cv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:24:36.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-v48cv" for this suite.
Jan 28 12:24:44.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:24:44.717: INFO: namespace: e2e-tests-subpath-v48cv, resource: bindings, ignored listing per whitelist
Jan 28 12:24:44.815: INFO: namespace e2e-tests-subpath-v48cv deletion completed in 8.38627051s

• [SLOW TEST:43.798 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
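The subpath test above mounts a single key of a secret-backed (atomically written) volume via `subPath`. A sketch of such a pod, with hypothetical secret and key names since the test's manifest is not printed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # hypothetical; created in the "Setting up data" step
  containers:
  - name: test-container
    image: busybox:1.29              # illustrative image
    command: ["cat", "/vol/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /vol/data
      subPath: data                  # hypothetical key; mounts one entry of the volume
```

The pod runs to completion and the framework treats phase Succeeded as "success or failure" satisfied, matching the Pending → Running → Succeeded progression in the log.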
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:24:44.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 28 12:24:44.993: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:25:02.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-s78dn" for this suite.
Jan 28 12:25:08.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:25:08.852: INFO: namespace: e2e-tests-init-container-s78dn, resource: bindings, ignored listing per whitelist
Jan 28 12:25:08.880: INFO: namespace e2e-tests-init-container-s78dn deletion completed in 6.381477194s

• [SLOW TEST:24.064 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
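The init-container test relies on the rule that with `restartPolicy: Never`, a failing init container fails the whole pod and the app containers never start. A minimal sketch (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]   # exits non-zero; with restartPolicy Never it is not retried
  containers:
  - name: app
    image: busybox:1.29
    command: ["/bin/true"]    # never started, because init1 failed
```

The pod ends in phase Failed, which is what the test asserts before tearing down its namespace.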
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:25:08.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j2ljc
Jan 28 12:25:19.123: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j2ljc
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 12:25:19.128: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:29:20.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j2ljc" for this suite.
Jan 28 12:29:28.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:29:29.029: INFO: namespace: e2e-tests-container-probe-j2ljc, resource: bindings, ignored listing per whitelist
Jan 28 12:29:29.079: INFO: namespace e2e-tests-container-probe-j2ljc deletion completed in 8.300299484s

• [SLOW TEST:260.199 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
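The liveness test above uses an exec probe running `cat /tmp/health`, as named in the spec title; since the file keeps existing, the probe keeps passing and restartCount stays at 0 for the roughly four-minute observation window. A sketch of such a pod (image and timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5              # illustrative values
      periodSeconds: 5
```

If the probe ever failed repeatedly, the kubelet would restart the container and restartCount would increment, failing the test's assertion.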
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:29:29.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 12:29:29.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z6b8k'
Jan 28 12:29:31.221: INFO: stderr: ""
Jan 28 12:29:31.222: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 28 12:29:41.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z6b8k -o json'
Jan 28 12:29:41.452: INFO: stderr: ""
Jan 28 12:29:41.452: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-28T12:29:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-z6b8k\",\n        \"resourceVersion\": \"19746035\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-z6b8k/pods/e2e-test-nginx-pod\",\n        \"uid\": \"d55661dd-41c9-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-wl5j4\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-wl5j4\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-wl5j4\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T12:29:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T12:29:40Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T12:29:40Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T12:29:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://4bb31d40f0658eedb556a072aa998f6f44f80aa7d3803bb9fa02c03de2f7ce8f\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-01-28T12:29:39Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-28T12:29:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 28 12:29:41.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-z6b8k'
Jan 28 12:29:41.825: INFO: stderr: ""
Jan 28 12:29:41.826: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 28 12:29:41.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-z6b8k'
Jan 28 12:29:50.465: INFO: stderr: ""
Jan 28 12:29:50.466: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:29:50.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z6b8k" for this suite.
Jan 28 12:29:56.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:29:56.715: INFO: namespace: e2e-tests-kubectl-z6b8k, resource: bindings, ignored listing per whitelist
Jan 28 12:29:56.809: INFO: namespace e2e-tests-kubectl-z6b8k deletion completed in 6.215782232s

• [SLOW TEST:27.730 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
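The kubectl-replace flow above first runs a pod with `docker.io/library/nginx:1.14-alpine`, then pipes an edited manifest to `kubectl replace -f -` to swap the image to `docker.io/library/busybox:1.29`. The replacement manifest would look roughly like this (a sketch; a real `replace` of a pod must carry the full object, and the container image is one of the few mutable pod fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaces nginx:1.14-alpine, as verified in the log
```

The test then reads the pod back and checks `.spec.containers[0].image` matches the new value.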
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:29:56.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 28 12:29:56.953: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 12:29:57.043: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 12:29:57.048: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 28 12:29:57.077: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 28 12:29:57.077: INFO: 	Container weave ready: true, restart count 0
Jan 28 12:29:57.077: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 12:29:57.077: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 28 12:29:57.077: INFO: 	Container coredns ready: true, restart count 0
Jan 28 12:29:57.077: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:29:57.077: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:29:57.077: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 28 12:29:57.077: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 28 12:29:57.077: INFO: 	Container coredns ready: true, restart count 0
Jan 28 12:29:57.077: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 28 12:29:57.077: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 12:29:57.077: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 28 12:29:57.225: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e4debda1-41c9-11ea-a04a-0242ac110005.15ee0ccdca9191e9], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-qnctr/filler-pod-e4debda1-41c9-11ea-a04a-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e4debda1-41c9-11ea-a04a-0242ac110005.15ee0ccee2d0d21b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e4debda1-41c9-11ea-a04a-0242ac110005.15ee0ccf8c2c2bcc], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e4debda1-41c9-11ea-a04a-0242ac110005.15ee0ccfbe79782c], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ee0cd02344f283], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:30:08.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qnctr" for this suite.
Jan 28 12:30:16.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:30:16.834: INFO: namespace: e2e-tests-sched-pred-qnctr, resource: bindings, ignored listing per whitelist
Jan 28 12:30:16.887: INFO: namespace e2e-tests-sched-pred-qnctr deletion completed in 8.224926673s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.077 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
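The scheduler-predicates test first launches "filler" pods that consume most of the node's allocatable CPU (the per-pod requests tallied above sum to 770m of existing usage), then creates one more pod whose request cannot fit, producing the `FailedScheduling ... 1 Insufficient cpu` event. A sketch of that final pod; the exact CPU figure is an assumption, any request above the node's remaining capacity behaves the same:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # the image the filler pods use, per the Pulled event above
    resources:
      requests:
        cpu: "600m"               # illustrative; chosen to exceed the node's free CPU
```

The pod stays Pending with the FailedScheduling event, which is exactly what the test waits to observe.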
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:30:16.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-7mtcx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7mtcx to expose endpoints map[]
Jan 28 12:30:18.137: INFO: Get endpoints failed (312.29139ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 28 12:30:19.160: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7mtcx exposes endpoints map[] (1.335250091s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-7mtcx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7mtcx to expose endpoints map[pod1:[100]]
Jan 28 12:30:23.392: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.208227112s elapsed, will retry)
Jan 28 12:30:27.898: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7mtcx exposes endpoints map[pod1:[100]] (8.714506378s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-7mtcx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7mtcx to expose endpoints map[pod1:[100] pod2:[101]]
Jan 28 12:30:34.089: INFO: Unexpected endpoints: found map[f1f366a0-41c9-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (6.178964167s elapsed, will retry)
Jan 28 12:30:37.159: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7mtcx exposes endpoints map[pod2:[101] pod1:[100]] (9.249090629s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-7mtcx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7mtcx to expose endpoints map[pod2:[101]]
Jan 28 12:30:38.521: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7mtcx exposes endpoints map[pod2:[101]] (1.341763492s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-7mtcx
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7mtcx to expose endpoints map[]
Jan 28 12:30:38.799: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7mtcx exposes endpoints map[] (98.067146ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:30:39.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7mtcx" for this suite.
Jan 28 12:31:03.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:31:04.293: INFO: namespace: e2e-tests-services-7mtcx, resource: bindings, ignored listing per whitelist
Jan 28 12:31:04.648: INFO: namespace e2e-tests-services-7mtcx deletion completed in 25.445079296s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.761 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
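The multiport-endpoints test creates a Service with two ports and verifies the endpoints map as pods backing each port come and go (`map[pod1:[100] pod2:[101]]` in the log). A sketch of such a Service; port names and the service port numbers are assumptions, the target ports 100 and 101 come from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test     # hypothetical selector shared by pod1 and pod2
  ports:
  - name: portname1
    port: 80                     # illustrative service ports
    targetPort: 100              # matches pod1's endpoint port in the log
  - name: portname2
    port: 81
    targetPort: 101              # matches pod2's endpoint port in the log
```

Each pod exposes only one of the two target ports, so deleting pod1 shrinks the endpoints map to `map[pod2:[101]]`, as validated above.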
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:31:04.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 12:31:25.148: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:25.244: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:27.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:27.842: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:29.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:29.261: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:31.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:31.257: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:33.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:33.258: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:35.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:35.253: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:37.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:37.259: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:39.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:39.267: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:41.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:41.262: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 12:31:43.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 12:31:43.262: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:31:43.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8xbtk" for this suite.
Jan 28 12:32:07.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:32:07.506: INFO: namespace: e2e-tests-container-lifecycle-hook-8xbtk, resource: bindings, ignored listing per whitelist
Jan 28 12:32:07.549: INFO: namespace e2e-tests-container-lifecycle-hook-8xbtk deletion completed in 24.228390901s

• [SLOW TEST:62.901 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:32:07.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-djv8c
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 12:32:07.678: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 12:32:48.028: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-djv8c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 12:32:48.028: INFO: >>> kubeConfig: /root/.kube/config
I0128 12:32:48.195481       8 log.go:172] (0xc0006dfe40) (0xc00214c5a0) Create stream
I0128 12:32:48.195744       8 log.go:172] (0xc0006dfe40) (0xc00214c5a0) Stream added, broadcasting: 1
I0128 12:32:48.212237       8 log.go:172] (0xc0006dfe40) Reply frame received for 1
I0128 12:32:48.212323       8 log.go:172] (0xc0006dfe40) (0xc00100df40) Create stream
I0128 12:32:48.212346       8 log.go:172] (0xc0006dfe40) (0xc00100df40) Stream added, broadcasting: 3
I0128 12:32:48.214364       8 log.go:172] (0xc0006dfe40) Reply frame received for 3
I0128 12:32:48.214450       8 log.go:172] (0xc0006dfe40) (0xc00257a500) Create stream
I0128 12:32:48.214467       8 log.go:172] (0xc0006dfe40) (0xc00257a500) Stream added, broadcasting: 5
I0128 12:32:48.216418       8 log.go:172] (0xc0006dfe40) Reply frame received for 5
I0128 12:32:48.430946       8 log.go:172] (0xc0006dfe40) Data frame received for 3
I0128 12:32:48.431311       8 log.go:172] (0xc00100df40) (3) Data frame handling
I0128 12:32:48.431414       8 log.go:172] (0xc00100df40) (3) Data frame sent
I0128 12:32:48.733013       8 log.go:172] (0xc0006dfe40) Data frame received for 1
I0128 12:32:48.733261       8 log.go:172] (0xc00214c5a0) (1) Data frame handling
I0128 12:32:48.733333       8 log.go:172] (0xc00214c5a0) (1) Data frame sent
I0128 12:32:48.734297       8 log.go:172] (0xc0006dfe40) (0xc00257a500) Stream removed, broadcasting: 5
I0128 12:32:48.734373       8 log.go:172] (0xc0006dfe40) (0xc00214c5a0) Stream removed, broadcasting: 1
I0128 12:32:48.734516       8 log.go:172] (0xc0006dfe40) (0xc00100df40) Stream removed, broadcasting: 3
I0128 12:32:48.734908       8 log.go:172] (0xc0006dfe40) Go away received
I0128 12:32:48.735278       8 log.go:172] (0xc0006dfe40) (0xc00214c5a0) Stream removed, broadcasting: 1
I0128 12:32:48.735321       8 log.go:172] (0xc0006dfe40) (0xc00100df40) Stream removed, broadcasting: 3
I0128 12:32:48.735348       8 log.go:172] (0xc0006dfe40) (0xc00257a500) Stream removed, broadcasting: 5
Jan 28 12:32:48.735: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:32:48.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-djv8c" for this suite.
Jan 28 12:33:12.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:33:12.934: INFO: namespace: e2e-tests-pod-network-test-djv8c, resource: bindings, ignored listing per whitelist
Jan 28 12:33:12.959: INFO: namespace e2e-tests-pod-network-test-djv8c deletion completed in 24.194655766s

• [SLOW TEST:65.409 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:33:12.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-59b1c5ec-41ca-11ea-a04a-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-59b1c8cd-41ca-11ea-a04a-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-59b1c5ec-41ca-11ea-a04a-0242ac110005
STEP: Updating configmap cm-test-opt-upd-59b1c8cd-41ca-11ea-a04a-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-59b1c927-41ca-11ea-a04a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:34:56.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8252n" for this suite.
Jan 28 12:35:20.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:35:20.693: INFO: namespace: e2e-tests-configmap-8252n, resource: bindings, ignored listing per whitelist
Jan 28 12:35:20.728: INFO: namespace e2e-tests-configmap-8252n deletion completed in 24.34934781s

• [SLOW TEST:127.768 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:35:20.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:35:33.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-j6hmm" for this suite.
Jan 28 12:35:39.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:35:39.392: INFO: namespace: e2e-tests-kubelet-test-j6hmm, resource: bindings, ignored listing per whitelist
Jan 28 12:35:39.410: INFO: namespace e2e-tests-kubelet-test-j6hmm deletion completed in 6.304877629s

• [SLOW TEST:18.681 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:35:39.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:35:39.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-r9nj9" to be "success or failure"
Jan 28 12:35:39.627: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.964158ms
Jan 28 12:35:41.651: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057933518s
Jan 28 12:35:43.670: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077026973s
Jan 28 12:35:46.349: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756467725s
Jan 28 12:35:48.365: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772362573s
Jan 28 12:35:50.382: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.789214306s
STEP: Saw pod success
Jan 28 12:35:50.382: INFO: Pod "downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:35:50.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:35:52.010: INFO: Waiting for pod downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005 to disappear
Jan 28 12:35:52.029: INFO: Pod downwardapi-volume-b0edb8b7-41ca-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:35:52.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r9nj9" for this suite.
Jan 28 12:35:58.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:35:58.250: INFO: namespace: e2e-tests-projected-r9nj9, resource: bindings, ignored listing per whitelist
Jan 28 12:35:58.324: INFO: namespace e2e-tests-projected-r9nj9 deletion completed in 6.280579542s

• [SLOW TEST:18.914 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:35:58.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bc301f46-41ca-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:35:58.656: INFO: Waiting up to 5m0s for pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-lnswr" to be "success or failure"
Jan 28 12:35:58.824: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 167.726215ms
Jan 28 12:36:01.010: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35406224s
Jan 28 12:36:03.025: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369025805s
Jan 28 12:36:05.106: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449290658s
Jan 28 12:36:07.120: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463571732s
Jan 28 12:36:09.138: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.481696314s
STEP: Saw pod success
Jan 28 12:36:09.138: INFO: Pod "pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:36:09.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 28 12:36:09.229: INFO: Waiting for pod pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005 to disappear
Jan 28 12:36:09.259: INFO: Pod pod-secrets-bc42eb14-41ca-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:36:09.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lnswr" for this suite.
Jan 28 12:36:15.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:36:15.966: INFO: namespace: e2e-tests-secrets-lnswr, resource: bindings, ignored listing per whitelist
Jan 28 12:36:16.016: INFO: namespace e2e-tests-secrets-lnswr deletion completed in 6.69140433s

• [SLOW TEST:17.692 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:36:16.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-98kc
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 12:36:16.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-98kc" in namespace "e2e-tests-subpath-8cx82" to be "success or failure"
Jan 28 12:36:16.236: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.896649ms
Jan 28 12:36:18.353: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131403512s
Jan 28 12:36:20.384: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162643322s
Jan 28 12:36:22.407: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185735855s
Jan 28 12:36:24.429: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207296828s
Jan 28 12:36:26.448: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226716048s
Jan 28 12:36:28.748: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.526468244s
Jan 28 12:36:30.779: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.557581486s
Jan 28 12:36:32.794: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 16.572019829s
Jan 28 12:36:34.817: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 18.595263897s
Jan 28 12:36:36.855: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 20.633727808s
Jan 28 12:36:38.886: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 22.663923605s
Jan 28 12:36:40.913: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 24.691661818s
Jan 28 12:36:42.937: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 26.71564515s
Jan 28 12:36:44.962: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 28.740600891s
Jan 28 12:36:46.986: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 30.76398499s
Jan 28 12:36:49.247: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Running", Reason="", readiness=false. Elapsed: 33.025694081s
Jan 28 12:36:51.265: INFO: Pod "pod-subpath-test-configmap-98kc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.043524945s
STEP: Saw pod success
Jan 28 12:36:51.265: INFO: Pod "pod-subpath-test-configmap-98kc" satisfied condition "success or failure"
Jan 28 12:36:51.273: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-98kc container test-container-subpath-configmap-98kc: 
STEP: delete the pod
Jan 28 12:36:51.516: INFO: Waiting for pod pod-subpath-test-configmap-98kc to disappear
Jan 28 12:36:51.788: INFO: Pod pod-subpath-test-configmap-98kc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-98kc
Jan 28 12:36:51.789: INFO: Deleting pod "pod-subpath-test-configmap-98kc" in namespace "e2e-tests-subpath-8cx82"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:36:51.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8cx82" for this suite.
Jan 28 12:36:57.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:36:57.961: INFO: namespace: e2e-tests-subpath-8cx82, resource: bindings, ignored listing per whitelist
Jan 28 12:36:58.026: INFO: namespace e2e-tests-subpath-8cx82 deletion completed in 6.203266515s

• [SLOW TEST:42.009 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:36:58.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-dfda3d6a-41ca-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:36:58.433: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-bznh4" to be "success or failure"
Jan 28 12:36:58.474: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.599569ms
Jan 28 12:37:00.498: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064482953s
Jan 28 12:37:02.529: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095937472s
Jan 28 12:37:04.636: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20222811s
Jan 28 12:37:06.730: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296063285s
Jan 28 12:37:09.532: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.098482033s
STEP: Saw pod success
Jan 28 12:37:09.532: INFO: Pod "pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:37:09.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 12:37:09.768: INFO: Waiting for pod pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005 to disappear
Jan 28 12:37:09.776: INFO: Pod pod-projected-secrets-dfe12ceb-41ca-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:37:09.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bznh4" for this suite.
Jan 28 12:37:15.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:37:15.981: INFO: namespace: e2e-tests-projected-bznh4, resource: bindings, ignored listing per whitelist
Jan 28 12:37:15.997: INFO: namespace e2e-tests-projected-bznh4 deletion completed in 6.210845996s

• [SLOW TEST:17.970 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:37:15.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:37:29.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qmvc9" for this suite.
Jan 28 12:37:53.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:37:53.521: INFO: namespace: e2e-tests-replication-controller-qmvc9, resource: bindings, ignored listing per whitelist
Jan 28 12:37:53.548: INFO: namespace e2e-tests-replication-controller-qmvc9 deletion completed in 24.233472875s

• [SLOW TEST:37.549 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:37:53.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-00e7f9f5-41cb-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:37:53.813: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-dxzjt" to be "success or failure"
Jan 28 12:37:53.915: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.436345ms
Jan 28 12:37:55.930: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116503483s
Jan 28 12:37:57.948: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134917794s
Jan 28 12:38:00.620: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806957273s
Jan 28 12:38:02.636: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822642686s
Jan 28 12:38:04.704: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.890558416s
STEP: Saw pod success
Jan 28 12:38:04.705: INFO: Pod "pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:38:04.749: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 12:38:05.083: INFO: Waiting for pod pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:38:05.124: INFO: Pod pod-projected-secrets-00e940cd-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:38:05.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dxzjt" for this suite.
Jan 28 12:38:11.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:38:11.491: INFO: namespace: e2e-tests-projected-dxzjt, resource: bindings, ignored listing per whitelist
Jan 28 12:38:11.505: INFO: namespace e2e-tests-projected-dxzjt deletion completed in 6.366180008s

• [SLOW TEST:17.957 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
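The pod-consumption test above polls the pod's phase roughly every 2 seconds until it reaches "Succeeded" or "Failure", with a 5-minute deadline ("Waiting up to 5m0s for pod ... to be 'success or failure'"). A minimal sketch of that wait loop, with `get_phase` as a hypothetical callable standing in for a real API lookup, and the clock and sleep functions injected so the loop is testable:

```python
import time

def wait_for_pod_phase(get_phase, target=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `target`, or raise TimeoutError once `timeout` seconds elapse.
    Mirrors the "Phase='Pending' ... Elapsed: ..." polling seen in the log."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()  # e.g. "Pending", "Running", "Succeeded"
        if phase in target:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

Injecting `clock` and `sleep` is only for testability; a real caller would use the defaults and a function that reads `pod.status.phase` from the API server.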
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:38:11.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:38:37.986: INFO: Container started at 2020-01-28 12:38:19 +0000 UTC, pod became ready at 2020-01-28 12:38:36 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:38:37.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hbs7t" for this suite.
Jan 28 12:39:02.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:39:02.222: INFO: namespace: e2e-tests-container-probe-hbs7t, resource: bindings, ignored listing per whitelist
Jan 28 12:39:02.283: INFO: namespace e2e-tests-container-probe-hbs7t deletion completed in 24.289403747s

• [SLOW TEST:50.778 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:39:02.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 28 12:39:12.949: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:39:38.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-nsfwd" for this suite.
Jan 28 12:39:44.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:39:44.329: INFO: namespace: e2e-tests-namespaces-nsfwd, resource: bindings, ignored listing per whitelist
Jan 28 12:39:44.378: INFO: namespace e2e-tests-namespaces-nsfwd deletion completed in 6.223283241s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9nh8n" for this suite.
Jan 28 12:39:44.381: INFO: Namespace e2e-tests-nsdeletetest-9nh8n was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-gks9x" for this suite.
Jan 28 12:39:50.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:39:50.545: INFO: namespace: e2e-tests-nsdeletetest-gks9x, resource: bindings, ignored listing per whitelist
Jan 28 12:39:50.751: INFO: namespace e2e-tests-nsdeletetest-gks9x deletion completed in 6.369421655s

• [SLOW TEST:48.467 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:39:50.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:39:50.887: INFO: Creating deployment "test-recreate-deployment"
Jan 28 12:39:50.896: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 28 12:39:50.981: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 28 12:39:53.146: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 28 12:39:53.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:39:55.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:39:57.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:39:59.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715811991, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:40:01.189: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 28 12:40:01.205: INFO: Updating deployment test-recreate-deployment
Jan 28 12:40:01.205: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 28 12:40:02.081: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-4x29p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4x29p/deployments/test-recreate-deployment,UID:46b86df4-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747324,Generation:2,CreationTimestamp:2020-01-28 12:39:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-28 12:40:01 +0000 UTC 2020-01-28 12:40:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-28 12:40:01 +0000 UTC 2020-01-28 12:39:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 28 12:40:02.104: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-4x29p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4x29p/replicasets/test-recreate-deployment-589c4bfd,UID:4d19af5e-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747321,Generation:1,CreationTimestamp:2020-01-28 12:40:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 46b86df4-41cb-11ea-a994-fa163e34d433 0xc00194392f 0xc001943940}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 12:40:02.104: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 28 12:40:02.105: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-4x29p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4x29p/replicasets/test-recreate-deployment-5bf7f65dc,UID:46c70b99-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747312,Generation:2,CreationTimestamp:2020-01-28 12:39:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 46b86df4-41cb-11ea-a994-fa163e34d433 0xc001943a00 0xc001943a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 12:40:02.122: INFO: Pod "test-recreate-deployment-589c4bfd-7q9bs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-7q9bs,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-4x29p,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4x29p/pods/test-recreate-deployment-589c4bfd-7q9bs,UID:4d2dfcb2-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747325,Generation:0,CreationTimestamp:2020-01-28 12:40:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4d19af5e-41cb-11ea-a994-fa163e34d433 0xc0023e269f 0xc0023e26b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xnffw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xnffw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xnffw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023e2710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023e2730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:40:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:40:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:40:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:40:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 12:40:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:40:02.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-4x29p" for this suite.
Jan 28 12:40:10.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:40:10.568: INFO: namespace: e2e-tests-deployment-4x29p, resource: bindings, ignored listing per whitelist
Jan 28 12:40:10.585: INFO: namespace e2e-tests-deployment-4x29p deletion completed in 8.444181047s

• [SLOW TEST:19.834 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
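The repeated DeploymentStatus dumps above are the framework polling until the deployment is "complete": the controller has observed the latest generation and every replica is updated and available. A hedged sketch of that completeness predicate — the field names follow the `DeploymentStatus` printed in the log, but this helper and its shape are illustrative, not the framework's actual code:

```python
from dataclasses import dataclass

@dataclass
class DeploymentStatus:
    # Snake-cased stand-ins for the ObservedGeneration/Replicas/
    # UpdatedReplicas/AvailableReplicas fields dumped in the log.
    observed_generation: int
    replicas: int
    updated_replicas: int
    available_replicas: int

def deployment_complete(spec_replicas: int, generation: int,
                        status: DeploymentStatus) -> bool:
    """True once the controller has seen the latest spec and every
    replica is updated and available -- the condition "Waiting
    deployment ... to complete" is polling for."""
    return (status.observed_generation >= generation
            and status.updated_replicas == spec_replicas
            and status.replicas == spec_replicas
            and status.available_replicas == spec_replicas)
```

With the values from the log (`ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, AvailableReplicas:0`), the predicate stays false until `AvailableReplicas` reaches 1, which is exactly why the status line repeats.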
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:40:10.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 28 12:40:22.978: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-528dd3a1-41cb-11ea-a04a-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-bb5jc", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bb5jc/pods/pod-submit-remove-528dd3a1-41cb-11ea-a04a-0242ac110005", UID:"528f19e7-41cb-11ea-a994-fa163e34d433", ResourceVersion:"19747380", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715812010, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"741492238"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-znm6d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a67780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-znm6d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001859e78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b6c480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001859eb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001859ed0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001859ed8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001859edc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715812010, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715812021, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715812021, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715812010, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0026f9580), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0026f95a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://4e141e63eef619b1d5426bc0f6c30da871c2148114fdb6d1f909137981f8732b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:40:28.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bb5jc" for this suite.
Jan 28 12:40:35.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:40:35.273: INFO: namespace: e2e-tests-pods-bb5jc, resource: bindings, ignored listing per whitelist
Jan 28 12:40:35.288: INFO: namespace e2e-tests-pods-bb5jc deletion completed in 6.311727372s

• [SLOW TEST:24.702 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:40:35.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:40:35.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-sx8nq" to be "success or failure"
Jan 28 12:40:35.433: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.953133ms
Jan 28 12:40:37.456: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030643909s
Jan 28 12:40:39.471: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04517127s
Jan 28 12:40:41.718: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.292572066s
Jan 28 12:40:44.555: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.129158957s
Jan 28 12:40:46.612: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.186333061s
STEP: Saw pod success
Jan 28 12:40:46.613: INFO: Pod "downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:40:46.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:40:47.184: INFO: Waiting for pod downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:40:47.516: INFO: Pod downwardapi-volume-614285d8-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:40:47.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sx8nq" for this suite.
Jan 28 12:40:53.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:40:53.817: INFO: namespace: e2e-tests-downward-api-sx8nq, resource: bindings, ignored listing per whitelist
Jan 28 12:40:53.845: INFO: namespace e2e-tests-downward-api-sx8nq deletion completed in 6.318264798s

• [SLOW TEST:18.557 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:40:53.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:40:54.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-bltmk" to be "success or failure"
Jan 28 12:40:54.229: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.746779ms
Jan 28 12:40:56.313: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089612258s
Jan 28 12:40:58.329: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105510056s
Jan 28 12:41:00.672: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448210836s
Jan 28 12:41:02.972: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748107637s
Jan 28 12:41:05.092: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.868539405s
STEP: Saw pod success
Jan 28 12:41:05.093: INFO: Pod "downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:41:05.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:41:05.414: INFO: Waiting for pod downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:41:05.442: INFO: Pod downwardapi-volume-6c75f6c5-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:41:05.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bltmk" for this suite.
Jan 28 12:41:11.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:41:11.728: INFO: namespace: e2e-tests-projected-bltmk, resource: bindings, ignored listing per whitelist
Jan 28 12:41:11.756: INFO: namespace e2e-tests-projected-bltmk deletion completed in 6.28820249s

• [SLOW TEST:17.910 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:41:11.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-77154e39-41cb-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 28 12:41:12.058: INFO: Waiting up to 5m0s for pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-secrets-57l58" to be "success or failure"
Jan 28 12:41:12.222: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 164.304299ms
Jan 28 12:41:14.269: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211233804s
Jan 28 12:41:16.291: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233131112s
Jan 28 12:41:18.601: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.542848223s
Jan 28 12:41:20.614: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55579288s
Jan 28 12:41:22.633: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.575502362s
Jan 28 12:41:24.670: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.611632632s
STEP: Saw pod success
Jan 28 12:41:24.671: INFO: Pod "pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:41:24.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 28 12:41:25.791: INFO: Waiting for pod pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:41:25.811: INFO: Pod pod-secrets-7717ce5c-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:41:25.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-57l58" for this suite.
Jan 28 12:41:31.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:41:32.019: INFO: namespace: e2e-tests-secrets-57l58, resource: bindings, ignored listing per whitelist
Jan 28 12:41:32.069: INFO: namespace e2e-tests-secrets-57l58 deletion completed in 6.230615264s

• [SLOW TEST:20.311 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:41:32.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 28 12:41:32.331: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:41:32.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gvg6l" for this suite.
Jan 28 12:41:38.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:41:38.779: INFO: namespace: e2e-tests-kubectl-gvg6l, resource: bindings, ignored listing per whitelist
Jan 28 12:41:38.882: INFO: namespace e2e-tests-kubectl-gvg6l deletion completed in 6.387143553s

• [SLOW TEST:6.812 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:41:38.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:41:39.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-gccms" to be "success or failure"
Jan 28 12:41:39.094: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687525ms
Jan 28 12:41:41.225: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137932625s
Jan 28 12:41:43.299: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212087035s
Jan 28 12:41:45.554: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46612984s
Jan 28 12:41:47.566: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.478787568s
Jan 28 12:41:49.579: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.49209656s
STEP: Saw pod success
Jan 28 12:41:49.580: INFO: Pod "downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:41:49.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:41:50.121: INFO: Waiting for pod downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:41:50.379: INFO: Pod downwardapi-volume-8732a557-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:41:50.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gccms" for this suite.
Jan 28 12:41:56.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:41:56.687: INFO: namespace: e2e-tests-projected-gccms, resource: bindings, ignored listing per whitelist
Jan 28 12:41:56.750: INFO: namespace e2e-tests-projected-gccms deletion completed in 6.35913601s

• [SLOW TEST:17.868 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:41:56.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:41:57.096: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.316909ms)
Jan 28 12:41:57.106: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.715824ms)
Jan 28 12:41:57.114: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.798085ms)
Jan 28 12:41:57.248: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 133.871479ms)
Jan 28 12:41:57.272: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.518845ms)
Jan 28 12:41:57.287: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.030928ms)
Jan 28 12:41:57.300: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.190194ms)
Jan 28 12:41:57.307: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.078261ms)
Jan 28 12:41:57.314: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.367312ms)
Jan 28 12:41:57.319: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.870327ms)
Jan 28 12:41:57.325: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.010576ms)
Jan 28 12:41:57.331: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.469353ms)
Jan 28 12:41:57.336: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.302963ms)
Jan 28 12:41:57.341: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.068924ms)
Jan 28 12:41:57.348: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.053737ms)
Jan 28 12:41:57.355: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.182727ms)
Jan 28 12:41:57.361: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.202489ms)
Jan 28 12:41:57.371: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.242303ms)
Jan 28 12:41:57.378: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.142441ms)
Jan 28 12:41:57.389: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.655825ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:41:57.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-sbcgw" for this suite.
Jan 28 12:42:03.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:42:03.547: INFO: namespace: e2e-tests-proxy-sbcgw, resource: bindings, ignored listing per whitelist
Jan 28 12:42:03.903: INFO: namespace e2e-tests-proxy-sbcgw deletion completed in 6.499038785s

• [SLOW TEST:7.153 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:42:03.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-9625143b-41cb-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:42:04.187: INFO: Waiting up to 5m0s for pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-j4hjx" to be "success or failure"
Jan 28 12:42:04.218: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.466187ms
Jan 28 12:42:06.286: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098423011s
Jan 28 12:42:08.329: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141248409s
Jan 28 12:42:10.348: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160475505s
Jan 28 12:42:12.564: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376608857s
Jan 28 12:42:14.625: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.437968113s
STEP: Saw pod success
Jan 28 12:42:14.626: INFO: Pod "pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:42:14.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 28 12:42:14.940: INFO: Waiting for pod pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:42:14.963: INFO: Pod pod-configmaps-9626d3ae-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:42:14.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-j4hjx" for this suite.
Jan 28 12:42:21.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:42:21.281: INFO: namespace: e2e-tests-configmap-j4hjx, resource: bindings, ignored listing per whitelist
Jan 28 12:42:21.407: INFO: namespace e2e-tests-configmap-j4hjx deletion completed in 6.435036205s

• [SLOW TEST:17.502 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:42:21.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:42:21.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-tjfgx" to be "success or failure"
Jan 28 12:42:21.652: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.826562ms
Jan 28 12:42:24.118: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478916113s
Jan 28 12:42:26.170: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531227941s
Jan 28 12:42:28.204: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565593112s
Jan 28 12:42:30.218: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579304526s
Jan 28 12:42:32.230: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591674238s
STEP: Saw pod success
Jan 28 12:42:32.231: INFO: Pod "downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:42:32.238: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:42:32.669: INFO: Waiting for pod downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:42:33.279: INFO: Pod downwardapi-volume-a0910600-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:42:33.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tjfgx" for this suite.
Jan 28 12:42:39.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:42:39.613: INFO: namespace: e2e-tests-projected-tjfgx, resource: bindings, ignored listing per whitelist
Jan 28 12:42:39.676: INFO: namespace e2e-tests-projected-tjfgx deletion completed in 6.374374962s

• [SLOW TEST:18.269 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:42:39.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 28 12:42:39.916: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747727,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 12:42:39.917: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747727,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 28 12:42:49.941: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747740,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 28 12:42:49.942: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747740,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 28 12:42:59.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747753,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 12:42:59.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747753,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 28 12:43:09.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747765,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 12:43:09.999: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-a,UID:ab775b03-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747765,Generation:0,CreationTimestamp:2020-01-28 12:42:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 28 12:43:20.058: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-b,UID:c35d00dc-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747778,Generation:0,CreationTimestamp:2020-01-28 12:43:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 12:43:20.059: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-b,UID:c35d00dc-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747778,Generation:0,CreationTimestamp:2020-01-28 12:43:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 28 12:43:30.084: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-b,UID:c35d00dc-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747790,Generation:0,CreationTimestamp:2020-01-28 12:43:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 12:43:30.085: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-5v2gx,SelfLink:/api/v1/namespaces/e2e-tests-watch-5v2gx/configmaps/e2e-watch-test-configmap-b,UID:c35d00dc-41cb-11ea-a994-fa163e34d433,ResourceVersion:19747790,Generation:0,CreationTimestamp:2020-01-28 12:43:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:43:40.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-5v2gx" for this suite.
Jan 28 12:43:46.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:43:46.322: INFO: namespace: e2e-tests-watch-5v2gx, resource: bindings, ignored listing per whitelist
Jan 28 12:43:46.411: INFO: namespace e2e-tests-watch-5v2gx deletion completed in 6.296963854s

• [SLOW TEST:66.734 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
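The watch test above creates, mutates, and deletes configmaps carrying a `watch-this-configmap` label and verifies that label-filtered watchers see exactly the expected ADDED/MODIFIED/DELETED events. The object it creates can be approximated with a manifest like this (reconstructed from the log; the `mutation` key matches the Data shown in the MODIFIED events):

```yaml
# Sketch of the configmap the test creates; namespace and UID are generated at runtime.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"   # the test bumps this to "2" to trigger the second MODIFIED event
```

A client watching with the label selector `watch-this-configmap=multiple-watchers-A` (for example `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`) would surface the same event sequence logged above, while a watcher on label B would see none of it.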
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:43:46.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 28 12:43:57.605: INFO: Successfully updated pod "labelsupdated34d325f-41cb-11ea-a04a-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:43:59.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p89p4" for this suite.
Jan 28 12:44:23.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:44:23.929: INFO: namespace: e2e-tests-downward-api-p89p4, resource: bindings, ignored listing per whitelist
Jan 28 12:44:23.929: INFO: namespace e2e-tests-downward-api-p89p4 deletion completed in 24.204293782s

• [SLOW TEST:37.518 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
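The Downward API test above mounts the pod's own labels as a file and then updates the labels, expecting the kubelet to rewrite the mounted file ("Successfully updated pod"). A minimal sketch of such a pod, assuming a busybox image (the e2e suite uses its own test images):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # the test uses a generated name like labelsupdated34d...
  labels:
    key1: value1               # the label the test later mutates
spec:
  containers:
  - name: client-container
    image: busybox             # image choice is an assumption
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

After patching `metadata.labels` on the running pod, the contents of `/etc/podinfo/labels` are refreshed by the kubelet; the test passes once the container observes the new value.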
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:44:23.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 12:44:24.189: INFO: Waiting up to 5m0s for pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-rjdjz" to be "success or failure"
Jan 28 12:44:24.349: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 159.132576ms
Jan 28 12:44:26.392: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202143052s
Jan 28 12:44:28.415: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225115425s
Jan 28 12:44:30.432: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242776506s
Jan 28 12:44:32.443: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253813363s
Jan 28 12:44:34.501: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.311131611s
STEP: Saw pod success
Jan 28 12:44:34.501: INFO: Pod "pod-e99cc2b1-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:44:34.514: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e99cc2b1-41cb-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:44:34.655: INFO: Waiting for pod pod-e99cc2b1-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:44:34.669: INFO: Pod pod-e99cc2b1-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:44:34.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rjdjz" for this suite.
Jan 28 12:44:40.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:44:40.996: INFO: namespace: e2e-tests-emptydir-rjdjz, resource: bindings, ignored listing per whitelist
Jan 28 12:44:40.998: INFO: namespace e2e-tests-emptydir-rjdjz deletion completed in 6.318685058s

• [SLOW TEST:17.068 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
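The "(root,0644,tmpfs)" case above runs a pod that writes a file into a memory-backed emptyDir as root with mode 0644, then exits; "success or failure" in the log means the suite waits for the pod to reach phase Succeeded. A rough equivalent, assuming busybox in place of the suite's mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox   # the e2e test uses a dedicated mounttest image; busybox is a stand-in
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # "tmpfs" in the test name: the volume is RAM-backed
```

The suite then fetches the container logs (the "Trying to get logs" line) and checks the reported permissions and content before deleting the pod.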
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:44:40.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:44:41.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-cq7d5" to be "success or failure"
Jan 28 12:44:41.348: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.577146ms
Jan 28 12:44:43.365: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027943858s
Jan 28 12:44:45.376: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038875899s
Jan 28 12:44:47.841: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503377178s
Jan 28 12:44:49.931: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.593167492s
STEP: Saw pod success
Jan 28 12:44:49.931: INFO: Pod "downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:44:49.947: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:44:50.079: INFO: Waiting for pod downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005 to disappear
Jan 28 12:44:50.094: INFO: Pod downwardapi-volume-f3d5491f-41cb-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:44:50.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cq7d5" for this suite.
Jan 28 12:44:58.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:44:58.216: INFO: namespace: e2e-tests-projected-cq7d5, resource: bindings, ignored listing per whitelist
Jan 28 12:44:58.301: INFO: namespace e2e-tests-projected-cq7d5 deletion completed in 8.200496111s

• [SLOW TEST:17.302 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
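The projected downwardAPI test above verifies that `defaultMode` on a projected volume is applied to the files it materializes. A hedged sketch of such a pod spec (the mode value and image are illustrative assumptions, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox   # stand-in for the suite's mounttest image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400   # illustrative; any mode not overridden per-item is applied here
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

As with the emptyDir case, the pod runs to completion and the suite inspects its logs for the expected file mode.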
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:44:58.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:44:58.549: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:44:59.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-x96s2" for this suite.
Jan 28 12:45:05.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:45:06.195: INFO: namespace: e2e-tests-custom-resource-definition-x96s2, resource: bindings, ignored listing per whitelist
Jan 28 12:45:06.210: INFO: namespace e2e-tests-custom-resource-definition-x96s2 deletion completed in 6.477589521s

• [SLOW TEST:7.908 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
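The CRD test above simply registers a CustomResourceDefinition and deletes it again; since the cluster in this log is v1.13, the `apiextensions.k8s.io/v1beta1` API would be in use. A hypothetical definition of the same shape (group, kind, and names here are placeholders, not the test's generated names):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the v1.13 cluster in this log
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Creating this object makes `/apis/stable.example.com/v1/namespaces/*/crontabs` servable; deleting it removes the endpoint, which is all the conformance test asserts.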
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:45:06.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:45:06.470: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 28 12:45:06.554: INFO: Number of nodes with available pods: 0
Jan 28 12:45:06.555: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 28 12:45:06.753: INFO: Number of nodes with available pods: 0
Jan 28 12:45:06.754: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:08.168: INFO: Number of nodes with available pods: 0
Jan 28 12:45:08.168: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:08.788: INFO: Number of nodes with available pods: 0
Jan 28 12:45:08.789: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:09.781: INFO: Number of nodes with available pods: 0
Jan 28 12:45:09.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:10.787: INFO: Number of nodes with available pods: 0
Jan 28 12:45:10.787: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:12.341: INFO: Number of nodes with available pods: 0
Jan 28 12:45:12.341: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:12.769: INFO: Number of nodes with available pods: 0
Jan 28 12:45:12.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:13.802: INFO: Number of nodes with available pods: 0
Jan 28 12:45:13.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:14.775: INFO: Number of nodes with available pods: 0
Jan 28 12:45:14.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:15.830: INFO: Number of nodes with available pods: 1
Jan 28 12:45:15.831: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 28 12:45:15.916: INFO: Number of nodes with available pods: 1
Jan 28 12:45:15.916: INFO: Number of running nodes: 0, number of available pods: 1
Jan 28 12:45:16.931: INFO: Number of nodes with available pods: 0
Jan 28 12:45:16.931: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 28 12:45:16.979: INFO: Number of nodes with available pods: 0
Jan 28 12:45:16.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:17.990: INFO: Number of nodes with available pods: 0
Jan 28 12:45:17.990: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:18.996: INFO: Number of nodes with available pods: 0
Jan 28 12:45:18.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:20.226: INFO: Number of nodes with available pods: 0
Jan 28 12:45:20.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:20.999: INFO: Number of nodes with available pods: 0
Jan 28 12:45:20.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:22.019: INFO: Number of nodes with available pods: 0
Jan 28 12:45:22.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:23.087: INFO: Number of nodes with available pods: 0
Jan 28 12:45:23.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:23.996: INFO: Number of nodes with available pods: 0
Jan 28 12:45:23.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:24.998: INFO: Number of nodes with available pods: 0
Jan 28 12:45:24.998: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:25.991: INFO: Number of nodes with available pods: 0
Jan 28 12:45:25.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:26.993: INFO: Number of nodes with available pods: 0
Jan 28 12:45:26.993: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:28.766: INFO: Number of nodes with available pods: 0
Jan 28 12:45:28.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:29.033: INFO: Number of nodes with available pods: 0
Jan 28 12:45:29.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:29.994: INFO: Number of nodes with available pods: 0
Jan 28 12:45:29.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:31.048: INFO: Number of nodes with available pods: 0
Jan 28 12:45:31.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 28 12:45:32.008: INFO: Number of nodes with available pods: 1
Jan 28 12:45:32.008: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-n7zx5, will wait for the garbage collector to delete the pods
Jan 28 12:45:32.176: INFO: Deleting DaemonSet.extensions daemon-set took: 86.022612ms
Jan 28 12:45:32.276: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.741333ms
Jan 28 12:45:39.243: INFO: Number of nodes with available pods: 0
Jan 28 12:45:39.243: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 12:45:39.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-n7zx5/daemonsets","resourceVersion":"19748093"},"items":null}

Jan 28 12:45:39.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-n7zx5/pods","resourceVersion":"19748093"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:45:39.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-n7zx5" for this suite.
Jan 28 12:45:47.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:45:47.685: INFO: namespace: e2e-tests-daemonsets-n7zx5, resource: bindings, ignored listing per whitelist
Jan 28 12:45:47.888: INFO: namespace e2e-tests-daemonsets-n7zx5 deletion completed in 8.43030101s

• [SLOW TEST:41.678 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
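The "complex daemon" test above creates a DaemonSet with a node selector (no pods schedule anywhere initially), labels a node to match (one pod launches), relabels it to unschedule the pod, and then retargets the selector while switching the update strategy to RollingUpdate. A sketch of the DaemonSet, with an assumed label key and image:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate        # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue            # label key/value are assumptions; the test flips blue -> green
      containers:
      - name: app
        image: busybox         # stand-in for the suite's test image
        command: ["sleep", "3600"]
```

The repeated "Number of nodes with available pods: 0" lines in the log are the suite polling until the single labeled node (hunter-server-hu5at5svl7ps) reports an available daemon pod.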
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:45:47.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-58ts2
Jan 28 12:45:56.436: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-58ts2
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 12:45:56.443: INFO: Initial restart count of pod liveness-exec is 0
Jan 28 12:46:55.414: INFO: Restart count of pod e2e-tests-container-probe-58ts2/liveness-exec is now 1 (58.970739147s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:46:55.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-58ts2" for this suite.
Jan 28 12:47:03.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:47:03.988: INFO: namespace: e2e-tests-container-probe-58ts2, resource: bindings, ignored listing per whitelist
Jan 28 12:47:04.019: INFO: namespace e2e-tests-container-probe-58ts2 deletion completed in 8.389920037s

• [SLOW TEST:76.130 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
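The liveness-probe test above starts a container that creates `/tmp/health`, later removes it, and expects the exec probe `cat /tmp/health` to begin failing so the kubelet restarts the container (restartCount goes from 0 to 1 in about 59s in the log). A close equivalent, assuming busybox and illustrative timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox   # image and exact sleep durations are assumptions
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once the file is deleted, consecutive probe failures exceed the failure threshold and the container is killed and restarted, which is exactly the restartCount transition the test asserts.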
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:47:04.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-bj9kj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-bj9kj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 12:47:18.611: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.661: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.695: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.710: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.715: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.719: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.723: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.737: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.747: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.752: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.759: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.762: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.766: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.769: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.773: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.776: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.779: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.783: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.787: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.791: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005: the server could not find the requested resource (get pods dns-test-491c3640-41cc-11ea-a04a-0242ac110005)
Jan 28 12:47:18.791: INFO: Lookups using e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-bj9kj.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 28 12:47:23.969: INFO: DNS probes using e2e-tests-dns-bj9kj/dns-test-491c3640-41cc-11ea-a04a-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:47:24.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-bj9kj" for this suite.
Jan 28 12:47:32.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:47:32.548: INFO: namespace: e2e-tests-dns-bj9kj, resource: bindings, ignored listing per whitelist
Jan 28 12:47:32.761: INFO: namespace e2e-tests-dns-bj9kj deletion completed in 8.48106438s

• [SLOW TEST:28.742 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
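For reference, the DNS conformance test above runs its wheezy/jessie `dig` loops inside a probe pod shaped roughly like the sketch below: one querier container per image writes results into a shared volume, and a webserver container serves `/results` so the test can poll them. This is a minimal sketch, assuming the usual e2e test-image layout; the image names, tags, and the simplified command are illustrative assumptions, not copied from this run.

```yaml
# Sketch of the DNS probe pod used by "should provide DNS for the cluster".
# Image names/tags and the shortened query loop are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  volumes:
    - name: results
      emptyDir: {}
  containers:
    - name: webserver              # serves /results for the test to poll
      image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
      ports:
        - containerPort: 80
      volumeMounts:
        - name: results
          mountPath: /results
    - name: querier                # runs the dig loop shown in the log above
      image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.1
      command:
        - sh
        - -c
        - >
          for i in $(seq 1 600); do
            check="$(dig +notcp +noall +answer +search kubernetes.default A)" &&
            test -n "$check" && echo OK > /results/udp@kubernetes.default;
            sleep 1;
          done
      volumeMounts:
        - name: results
          mountPath: /results
```

The "Unable to read ... the server could not find the requested resource" lines early in the log are expected while the probe pod is still starting; the test keeps polling until the `OK` files appear, which is why it ultimately reports success.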
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:47:32.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 28 12:47:32.956: INFO: Waiting up to 5m0s for pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-825kj" to be "success or failure"
Jan 28 12:47:33.000: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.920107ms
Jan 28 12:47:35.339: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382378891s
Jan 28 12:47:37.354: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397085588s
Jan 28 12:47:39.681: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724212802s
Jan 28 12:47:41.704: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.747529228s
Jan 28 12:47:43.716: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.759675483s
STEP: Saw pod success
Jan 28 12:47:43.717: INFO: Pod "pod-5a1ded4c-41cc-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:47:43.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5a1ded4c-41cc-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:47:44.382: INFO: Waiting for pod pod-5a1ded4c-41cc-11ea-a04a-0242ac110005 to disappear
Jan 28 12:47:44.419: INFO: Pod pod-5a1ded4c-41cc-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:47:44.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-825kj" for this suite.
Jan 28 12:47:50.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:47:50.600: INFO: namespace: e2e-tests-emptydir-825kj, resource: bindings, ignored listing per whitelist
Jan 28 12:47:50.773: INFO: namespace e2e-tests-emptydir-825kj deletion completed in 6.337930712s

• [SLOW TEST:18.012 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
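The EmptyDir test above ("non-root,0666,tmpfs") creates a short-lived pod along these lines: a tmpfs-backed `emptyDir` volume, a non-root security context, and a test container that creates a file with the requested mode and prints its permissions before exiting. This is a sketch following the common conformance pattern; the `mounttest` image, its tag, and its flags are assumptions.

```yaml
# Sketch of the emptyDir 0666-on-tmpfs test pod. Image and args are
# illustrative assumptions, not taken from this log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # "non-root" part of the test name
  volumes:
    - name: test-volume
      emptyDir:
        medium: Memory          # "tmpfs" part of the test name
  containers:
    - name: test-container
      image: gcr.io/kubernetes-e2e-test-images/mounttest:1.1
      args:
        - --new_file_0666=/test-volume/testfile
        - --file_perm=/test-volume/testfile
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
```

The "success or failure" wait in the log corresponds to this pod running to completion (`Phase="Succeeded"`) so the test can read its logs and verify the file mode.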
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:47:50.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:47:51.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b7fpm" for this suite.
Jan 28 12:47:57.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:47:57.422: INFO: namespace: e2e-tests-kubelet-test-b7fpm, resource: bindings, ignored listing per whitelist
Jan 28 12:47:57.430: INFO: namespace e2e-tests-kubelet-test-b7fpm deletion completed in 6.266926521s

• [SLOW TEST:6.656 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:47:57.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 28 12:47:57.616: INFO: namespace e2e-tests-kubectl-b9pjk
Jan 28 12:47:57.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b9pjk'
Jan 28 12:48:00.231: INFO: stderr: ""
Jan 28 12:48:00.232: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 28 12:48:01.253: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:01.253: INFO: Found 0 / 1
Jan 28 12:48:02.654: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:02.654: INFO: Found 0 / 1
Jan 28 12:48:03.248: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:03.248: INFO: Found 0 / 1
Jan 28 12:48:04.269: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:04.269: INFO: Found 0 / 1
Jan 28 12:48:05.266: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:05.267: INFO: Found 0 / 1
Jan 28 12:48:06.495: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:06.495: INFO: Found 0 / 1
Jan 28 12:48:07.261: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:07.261: INFO: Found 0 / 1
Jan 28 12:48:08.397: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:08.397: INFO: Found 0 / 1
Jan 28 12:48:09.338: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:09.338: INFO: Found 0 / 1
Jan 28 12:48:10.299: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:10.300: INFO: Found 1 / 1
Jan 28 12:48:10.300: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 12:48:10.307: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 12:48:10.307: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 12:48:10.307: INFO: wait on redis-master startup in e2e-tests-kubectl-b9pjk 
Jan 28 12:48:10.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bkjxl redis-master --namespace=e2e-tests-kubectl-b9pjk'
Jan 28 12:48:10.561: INFO: stderr: ""
Jan 28 12:48:10.562: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Jan 12:48:08.827 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 12:48:08.827 # Server started, Redis version 3.2.12\n1:M 28 Jan 12:48:08.827 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Jan 12:48:08.827 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 28 12:48:10.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-b9pjk'
Jan 28 12:48:11.011: INFO: stderr: ""
Jan 28 12:48:11.012: INFO: stdout: "service/rm2 exposed\n"
Jan 28 12:48:11.088: INFO: Service rm2 in namespace e2e-tests-kubectl-b9pjk found.
STEP: exposing service
Jan 28 12:48:13.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-b9pjk'
Jan 28 12:48:13.476: INFO: stderr: ""
Jan 28 12:48:13.476: INFO: stdout: "service/rm3 exposed\n"
Jan 28 12:48:13.523: INFO: Service rm3 in namespace e2e-tests-kubectl-b9pjk found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:48:15.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b9pjk" for this suite.
Jan 28 12:48:39.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:48:39.733: INFO: namespace: e2e-tests-kubectl-b9pjk, resource: bindings, ignored listing per whitelist
Jan 28 12:48:39.830: INFO: namespace e2e-tests-kubectl-b9pjk deletion completed in 24.265133217s

• [SLOW TEST:42.399 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
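The two `kubectl expose` invocations in the test above are equivalent to creating the following Services by hand (a sketch; the selector is inferred from the `app: redis` label the log shows the RC's pods carrying):

```yaml
# Equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis
  ports:
    - port: 1234
      targetPort: 6379
---
# Equivalent of:
#   kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm3
spec:
  selector:
    app: redis
  ports:
    - port: 2345
      targetPort: 6379
```

Note that exposing a service (`rm3`) reuses the original selector, so both Services front the same redis-master pod on different cluster ports.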
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:48:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ws8dt
Jan 28 12:48:50.099: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ws8dt
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 12:48:50.107: INFO: Initial restart count of pod liveness-http is 0
Jan 28 12:49:12.980: INFO: Restart count of pod e2e-tests-container-probe-ws8dt/liveness-http is now 1 (22.87279786s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:49:13.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ws8dt" for this suite.
Jan 28 12:49:19.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:49:19.372: INFO: namespace: e2e-tests-container-probe-ws8dt, resource: bindings, ignored listing per whitelist
Jan 28 12:49:19.385: INFO: namespace e2e-tests-container-probe-ws8dt deletion completed in 6.318488412s

• [SLOW TEST:39.555 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
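The `liveness-http` pod whose restart is observed above is built around an `httpGet` probe against `/healthz`. A minimal sketch, assuming the standard pattern of a server image that deliberately starts failing `/healthz` after a delay (the image name and port here are assumptions):

```yaml
# Sketch of the liveness-http test pod: the kubelet probes /healthz and
# restarts the container once the probe fails, which is the restartCount
# 0 -> 1 transition seen in the log.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/liveness     # illustrative; serves /healthz, then fails it
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 15
        failureThreshold: 1
```

The ~23s between "Initial restart count ... is 0" and "Restart count ... is now 1" covers the initial delay plus the first failed probe and container restart.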
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:49:19.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 28 12:49:19.686: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix168278183/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:49:19.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lntlp" for this suite.
Jan 28 12:49:25.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:49:26.027: INFO: namespace: e2e-tests-kubectl-lntlp, resource: bindings, ignored listing per whitelist
Jan 28 12:49:26.058: INFO: namespace e2e-tests-kubectl-lntlp deletion completed in 6.175601472s

• [SLOW TEST:6.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:49:26.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-9db2f3a1-41cc-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:49:26.561: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-98mbq" to be "success or failure"
Jan 28 12:49:26.737: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 175.279093ms
Jan 28 12:49:28.752: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19063686s
Jan 28 12:49:30.766: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204233121s
Jan 28 12:49:32.785: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223506569s
Jan 28 12:49:34.824: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262413252s
Jan 28 12:49:36.866: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.304823855s
STEP: Saw pod success
Jan 28 12:49:36.867: INFO: Pod "pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:49:36.935: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 12:49:37.249: INFO: Waiting for pod pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005 to disappear
Jan 28 12:49:37.445: INFO: Pod pod-projected-configmaps-9db5d230-41cc-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:49:37.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-98mbq" for this suite.
Jan 28 12:49:43.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:49:43.650: INFO: namespace: e2e-tests-projected-98mbq, resource: bindings, ignored listing per whitelist
Jan 28 12:49:43.740: INFO: namespace e2e-tests-projected-98mbq deletion completed in 6.281428794s

• [SLOW TEST:17.682 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:49:43.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0128 12:50:26.195259       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 12:50:26.195: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:50:26.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cm9rh" for this suite.
Jan 28 12:50:34.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:50:37.360: INFO: namespace: e2e-tests-gc-cm9rh, resource: bindings, ignored listing per whitelist
Jan 28 12:50:37.450: INFO: namespace e2e-tests-gc-cm9rh deletion completed in 11.24807586s

• [SLOW TEST:53.709 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
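The garbage-collector test above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods survive. As a minimal sketch (not taken from the suite itself), the delete request body such a test sends sets `propagationPolicy: "Orphan"` in a `DeleteOptions` object, which tells the garbage collector to strip the pods' `ownerReferences` instead of cascading the delete:

```python
import json

def orphaning_delete_options():
    """Build a DeleteOptions body that asks the garbage collector to
    orphan dependents (here: the RC's pods) instead of deleting them."""
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        # "Orphan" removes ownerReferences from dependents rather than
        # cascading the delete down to them.
        "propagationPolicy": "Orphan",
    }

body = json.dumps(orphaning_delete_options())
```

With this body, the RC disappears but its pods remain, which is exactly what the 30-second observation window in the log verifies.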
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:50:37.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 28 12:50:58.637: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c8734627-41cc-11ea-a04a-0242ac110005,GenerateName:,Namespace:e2e-tests-events-6ss9b,SelfLink:/api/v1/namespaces/e2e-tests-events-6ss9b/pods/send-events-c8734627-41cc-11ea-a04a-0242ac110005,UID:c88ef72e-41cc-11ea-a994-fa163e34d433,ResourceVersion:19748896,Generation:0,CreationTimestamp:2020-01-28 12:50:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 35398142,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xljjz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xljjz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-xljjz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00161c660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00161c680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:50:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:50:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:50:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:50:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-28 12:50:38 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-28 12:50:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://180fda9189982bc861f3ef16dc22e68eea1cfdd490e747b14b90633a80f7d4b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 28 12:51:00.655: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 28 12:51:02.671: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:51:02.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-6ss9b" for this suite.
Jan 28 12:51:42.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:51:43.023: INFO: namespace: e2e-tests-events-6ss9b, resource: bindings, ignored listing per whitelist
Jan 28 12:51:43.078: INFO: namespace e2e-tests-events-6ss9b deletion completed in 40.336986917s

• [SLOW TEST:65.626 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
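The events test above submits a pod and then checks that both the scheduler and the kubelet emitted events about it ("Saw scheduler event", "Saw kubelet event"). A minimal sketch of that check, using hypothetical sample data shaped like core/v1 Event objects, filters events by their reporting component:

```python
def events_from(events, component):
    """Select events whose reporting source matches the given component.
    The e2e test expects at least one event from 'default-scheduler'
    and at least one from the node's kubelet."""
    return [e for e in events if e["source"]["component"] == component]

# Hypothetical sample events for illustration only.
events = [
    {"reason": "Scheduled", "source": {"component": "default-scheduler"}},
    {"reason": "Pulled", "source": {"component": "kubelet"}},
    {"reason": "Started", "source": {"component": "kubelet"}},
]
```

The test passes once both filtered lists are non-empty, which is why the log shows two separate "checking for ... event" steps.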
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:51:43.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 28 12:51:53.289: INFO: Pod pod-hostip-ef479619-41cc-11ea-a04a-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:51:53.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bzjb4" for this suite.
Jan 28 12:52:17.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:52:17.433: INFO: namespace: e2e-tests-pods-bzjb4, resource: bindings, ignored listing per whitelist
Jan 28 12:52:17.544: INFO: namespace e2e-tests-pods-bzjb4 deletion completed in 24.246458793s

• [SLOW TEST:34.466 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
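The host IP test above polls the pod until `status.hostIP` is populated (the log line reports `hostIP: 10.96.1.240`, the node's address). A minimal sketch of that readiness check, treating the pod as a plain dict as returned by the JSON API:

```python
def host_ip(pod):
    """Return the pod's node IP once it has been scheduled and the
    kubelet has reported status; None while the field is still unset
    (the test keeps polling until this is non-empty)."""
    return pod.get("status", {}).get("hostIP") or None

# Hypothetical pod object mirroring the shape seen in the log.
pod = {"metadata": {"name": "pod-hostip"}, "status": {"hostIP": "10.96.1.240"}}
```

A freshly created pod has an empty status, so the helper returns None until scheduling and kubelet status reporting complete.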
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:52:17.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-03fb43d0-41cd-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:52:17.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-6qflz" to be "success or failure"
Jan 28 12:52:17.951: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.644604ms
Jan 28 12:52:19.979: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039564589s
Jan 28 12:52:21.993: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053674434s
Jan 28 12:52:24.240: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300543336s
Jan 28 12:52:26.403: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463581455s
Jan 28 12:52:28.419: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.47925019s
STEP: Saw pod success
Jan 28 12:52:28.419: INFO: Pod "pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:52:28.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 28 12:52:29.125: INFO: Waiting for pod pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:52:29.674: INFO: Pod pod-configmaps-03fd2db3-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:52:29.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6qflz" for this suite.
Jan 28 12:52:36.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:52:36.361: INFO: namespace: e2e-tests-configmap-6qflz, resource: bindings, ignored listing per whitelist
Jan 28 12:52:36.361: INFO: namespace e2e-tests-configmap-6qflz deletion completed in 6.652145263s

• [SLOW TEST:18.816 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
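The ConfigMap test above is the "with mappings" variant: instead of projecting every key under its own name, the volume's `items` list maps a specific key to a custom path inside the mount. A minimal sketch (field names follow the core/v1 ConfigMapVolumeSource API; the argument values are illustrative):

```python
def configmap_volume(name, cm_name, key, path):
    """ConfigMap volume that projects a single key to a chosen path,
    as the 'with mappings' test variant does. Only listed items are
    projected; unlisted keys do not appear in the mount."""
    return {
        "name": name,
        "configMap": {
            "name": cm_name,
            "items": [{"key": key, "path": path}],
        },
    }

vol = configmap_volume("configmap-volume", "configmap-test-volume-map",
                       "data-2", "path/to/data-2")
```

The non-root part of the test additionally runs the consuming container with a pod-level `securityContext.runAsUser`, verifying the projected file is still readable.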
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:52:36.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 28 12:52:36.625: INFO: Waiting up to 5m0s for pod "pod-0f209353-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-4mz8m" to be "success or failure"
Jan 28 12:52:36.632: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402186ms
Jan 28 12:52:38.667: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04149873s
Jan 28 12:52:40.683: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058029383s
Jan 28 12:52:42.741: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115436316s
Jan 28 12:52:44.751: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125833205s
Jan 28 12:52:46.763: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13755337s
STEP: Saw pod success
Jan 28 12:52:46.763: INFO: Pod "pod-0f209353-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:52:46.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0f209353-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:52:47.638: INFO: Waiting for pod pod-0f209353-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:52:47.969: INFO: Pod pod-0f209353-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:52:47.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4mz8m" for this suite.
Jan 28 12:52:54.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:52:54.165: INFO: namespace: e2e-tests-emptydir-4mz8m, resource: bindings, ignored listing per whitelist
Jan 28 12:52:54.166: INFO: namespace e2e-tests-emptydir-4mz8m deletion completed in 6.175391096s

• [SLOW TEST:17.803 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
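This and the following emptyDir tests ((root,0777,default), (root,0666,default), and so on) mount a volume with a given mode and have the test container print the resulting file permissions, which the framework compares against an ls-style string. A minimal sketch of that rendering, using only the standard library:

```python
import stat

def mode_string(mode):
    """Render a permission bitmask the way a mount-test container
    would print it for a regular file, e.g. 0o644 -> '-rw-r--r--'."""
    return stat.filemode(stat.S_IFREG | mode)
```

So the "(root,0666,default)" variant expects to see `-rw-rw-rw-` on the mounted file, and "(root,0777,default)" expects `-rwxrwxrwx`.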
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:52:54.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 28 12:52:54.315: INFO: Waiting up to 5m0s for pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-containers-p67r2" to be "success or failure"
Jan 28 12:52:54.325: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.354078ms
Jan 28 12:52:56.344: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029730815s
Jan 28 12:52:59.028: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71298898s
Jan 28 12:53:01.056: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.740941203s
Jan 28 12:53:03.084: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.769635835s
STEP: Saw pod success
Jan 28 12:53:03.085: INFO: Pod "client-containers-19abbac0-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:53:03.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-19abbac0-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:53:03.175: INFO: Waiting for pod client-containers-19abbac0-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:53:03.187: INFO: Pod client-containers-19abbac0-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:53:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-p67r2" for this suite.
Jan 28 12:53:09.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:53:09.413: INFO: namespace: e2e-tests-containers-p67r2, resource: bindings, ignored listing per whitelist
Jan 28 12:53:09.557: INFO: namespace e2e-tests-containers-p67r2 deletion completed in 6.240728216s

• [SLOW TEST:15.391 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
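The Docker Containers test above overrides both the image's default command and arguments. In the pod spec, `command` replaces the image's ENTRYPOINT and `args` replaces its CMD; omitting both is the "use the image defaults" case tested later in this run. A minimal sketch of building such a container spec (the names are illustrative):

```python
def container_spec(image, command=None, args=None):
    """Container spec fragment: 'command' overrides the image
    ENTRYPOINT, 'args' overrides its CMD. Leaving both unset keeps
    the image defaults."""
    c = {"name": "test-container", "image": image}
    if command is not None:
        c["command"] = command
    if args is not None:
        c["args"] = args
    return c

override = container_spec("busybox", command=["/bin/echo"], args=["hello"])
defaults = container_spec("busybox")
```

The "override all" variant then asserts on the container's output that the substituted command and arguments actually ran.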
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:53:09.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 12:53:09.947: INFO: Waiting up to 5m0s for pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-nsfkc" to be "success or failure"
Jan 28 12:53:10.131: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 183.575461ms
Jan 28 12:53:12.172: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224567016s
Jan 28 12:53:14.190: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242842502s
Jan 28 12:53:16.437: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489527096s
Jan 28 12:53:18.458: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510819843s
Jan 28 12:53:20.532: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.585096838s
STEP: Saw pod success
Jan 28 12:53:20.533: INFO: Pod "pod-22f9afc2-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:53:20.560: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-22f9afc2-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:53:21.026: INFO: Waiting for pod pod-22f9afc2-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:53:21.054: INFO: Pod pod-22f9afc2-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:53:21.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nsfkc" for this suite.
Jan 28 12:53:27.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:53:27.247: INFO: namespace: e2e-tests-emptydir-nsfkc, resource: bindings, ignored listing per whitelist
Jan 28 12:53:27.274: INFO: namespace e2e-tests-emptydir-nsfkc deletion completed in 6.209079639s

• [SLOW TEST:17.716 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:53:27.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 28 12:53:27.475: INFO: Waiting up to 5m0s for pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-zpk7h" to be "success or failure"
Jan 28 12:53:27.491: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.186578ms
Jan 28 12:53:29.504: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02841237s
Jan 28 12:53:31.525: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049512125s
Jan 28 12:53:33.649: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173497457s
Jan 28 12:53:35.659: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182957831s
Jan 28 12:53:37.699: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.223192644s
STEP: Saw pod success
Jan 28 12:53:37.699: INFO: Pod "pod-2d6f5255-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:53:37.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2d6f5255-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:53:38.028: INFO: Waiting for pod pod-2d6f5255-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:53:38.038: INFO: Pod pod-2d6f5255-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:53:38.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zpk7h" for this suite.
Jan 28 12:53:44.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:53:44.138: INFO: namespace: e2e-tests-emptydir-zpk7h, resource: bindings, ignored listing per whitelist
Jan 28 12:53:44.241: INFO: namespace e2e-tests-emptydir-zpk7h deletion completed in 6.194736154s

• [SLOW TEST:16.967 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:53:44.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 28 12:53:44.426: INFO: Waiting up to 5m0s for pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-containers-jgl2k" to be "success or failure"
Jan 28 12:53:44.459: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.133813ms
Jan 28 12:53:46.692: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265941963s
Jan 28 12:53:48.705: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.278962971s
Jan 28 12:53:50.885: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458693277s
Jan 28 12:53:53.029: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602235204s
Jan 28 12:53:55.442: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.015512124s
STEP: Saw pod success
Jan 28 12:53:55.443: INFO: Pod "client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:53:55.459: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:53:55.970: INFO: Waiting for pod client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:53:55.987: INFO: Pod client-containers-378a5ae6-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:53:55.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jgl2k" for this suite.
Jan 28 12:54:02.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:54:02.099: INFO: namespace: e2e-tests-containers-jgl2k, resource: bindings, ignored listing per whitelist
Jan 28 12:54:02.224: INFO: namespace e2e-tests-containers-jgl2k deletion completed in 6.21815904s

• [SLOW TEST:17.983 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:54:02.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 12:54:02.605: INFO: Waiting up to 5m0s for pod "pod-42525192-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-bsx4f" to be "success or failure"
Jan 28 12:54:02.621: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.336768ms
Jan 28 12:54:04.638: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032740436s
Jan 28 12:54:06.654: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04858012s
Jan 28 12:54:08.822: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216054495s
Jan 28 12:54:10.837: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231173418s
Jan 28 12:54:12.877: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.271718681s
STEP: Saw pod success
Jan 28 12:54:12.878: INFO: Pod "pod-42525192-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:54:12.925: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-42525192-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:54:13.219: INFO: Waiting for pod pod-42525192-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:54:13.353: INFO: Pod pod-42525192-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:54:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bsx4f" for this suite.
Jan 28 12:54:19.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:54:19.669: INFO: namespace: e2e-tests-emptydir-bsx4f, resource: bindings, ignored listing per whitelist
Jan 28 12:54:19.699: INFO: namespace e2e-tests-emptydir-bsx4f deletion completed in 6.323019838s

• [SLOW TEST:17.474 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:54:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 12:54:20.090: INFO: Waiting up to 5m0s for pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-n8k24" to be "success or failure"
Jan 28 12:54:20.310: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 219.000259ms
Jan 28 12:54:22.341: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250635725s
Jan 28 12:54:24.362: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271296398s
Jan 28 12:54:27.188: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.097859208s
Jan 28 12:54:29.213: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.122676211s
Jan 28 12:54:31.242: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.151622583s
STEP: Saw pod success
Jan 28 12:54:31.243: INFO: Pod "pod-4cc98c1e-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:54:31.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4cc98c1e-41cd-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 12:54:31.401: INFO: Waiting for pod pod-4cc98c1e-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:54:31.415: INFO: Pod pod-4cc98c1e-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:54:31.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n8k24" for this suite.
Jan 28 12:54:39.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:54:39.625: INFO: namespace: e2e-tests-emptydir-n8k24, resource: bindings, ignored listing per whitelist
Jan 28 12:54:39.704: INFO: namespace e2e-tests-emptydir-n8k24 deletion completed in 8.281469117s

• [SLOW TEST:20.004 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
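The two EmptyDir tests above create a pod that writes a file into an emptyDir volume on the default medium with the given mode, once as root and once as a non-root user. As a rough sketch of that kind of pod (the image, mount path, file name, and non-root UID below are illustrative assumptions, not taken from this log):

```python
# Sketch of the kind of pod the emptyDir mode tests create: a container
# that writes a file into an emptyDir mount and sets its permissions.
# Image, paths, and the non-root UID are illustrative assumptions.
def emptydir_test_pod(name, mode_octal, run_as_root=True):
    """Build a pod manifest dict mounting an emptyDir on the default medium."""
    security = {} if run_as_root else {"runAsUser": 1001, "runAsNonRoot": True}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["sh", "-c",
                            f"touch /mnt/test && chmod {mode_octal:o} /mnt/test"
                            " && ls -l /mnt/test"],
                "volumeMounts": [{"name": "scratch", "mountPath": "/mnt"}],
                "securityContext": security,
            }],
            # An emptyDir with no `medium` field uses the node's default storage,
            # matching the "default medium" wording in the STEP lines above.
            "volumes": [{"name": "scratch", "emptyDir": {}}],
        },
    }
```

The framework then waits for the pod to reach "Succeeded" (the "success or failure" condition polled above) and reads the container's logs to verify the mode.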
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:54:39.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-58b26f1a-41cd-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 12:54:40.067: INFO: Waiting up to 5m0s for pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-configmap-sdmcv" to be "success or failure"
Jan 28 12:54:40.079: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.561493ms
Jan 28 12:54:42.654: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586808363s
Jan 28 12:54:47.153: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.085960743s
Jan 28 12:54:49.174: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.107279829s
Jan 28 12:54:51.194: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.127259026s
Jan 28 12:54:54.506: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.439400109s
Jan 28 12:54:56.585: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.51817376s
Jan 28 12:54:58.619: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.552696455s
Jan 28 12:55:00.702: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.635247119s
Jan 28 12:55:02.742: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.675637159s
STEP: Saw pod success
Jan 28 12:55:02.743: INFO: Pod "pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:55:02.753: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 28 12:55:05.347: INFO: Waiting for pod pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:55:05.464: INFO: Pod pod-configmaps-58b429a7-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:55:05.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sdmcv" for this suite.
Jan 28 12:55:13.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:55:14.008: INFO: namespace: e2e-tests-configmap-sdmcv, resource: bindings, ignored listing per whitelist
Jan 28 12:55:14.082: INFO: namespace e2e-tests-configmap-sdmcv deletion completed in 8.543265452s

• [SLOW TEST:34.378 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
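The ConfigMap test above mounts a ConfigMap as a volume with `defaultMode` set, which controls the permission bits of the projected files. In the API the mode is a plain integer, so an octal mode has to be given in decimal form in JSON (e.g. the `DefaultMode:*420` visible in the pod dumps later in this log is 0o644). A minimal sketch, with illustrative names and paths:

```python
# Sketch of a pod consuming a ConfigMap through a volume with `defaultMode`
# set, mirroring the test above. Names, image, and mount path are
# illustrative assumptions.
def configmap_volume_pod(cm_name, default_mode):
    """Pod manifest mounting a ConfigMap volume; `default_mode` is the
    integer file mode applied to each projected key (0o644 == 420)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-demo"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                "command": ["sh", "-c", "ls -l /etc/configmap-volume"],
                "volumeMounts": [{"name": "cm",
                                  "mountPath": "/etc/configmap-volume"}],
            }],
            "volumes": [{
                "name": "cm",
                "configMap": {"name": cm_name, "defaultMode": default_mode},
            }],
        },
    }
```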
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:55:14.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0128 12:55:25.709786       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 12:55:25.710: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:55:25.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4tmn6" for this suite.
Jan 28 12:55:32.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:55:32.922: INFO: namespace: e2e-tests-gc-4tmn6, resource: bindings, ignored listing per whitelist
Jan 28 12:55:32.959: INFO: namespace e2e-tests-gc-4tmn6 deletion completed in 7.214714193s

• [SLOW TEST:18.876 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
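The garbage-collector test above relies on pods created by a replication controller carrying an `ownerReferences` entry pointing back at it: deleting the RC without orphaning lets the garbage collector remove the dependents, while an orphaning delete strips the reference and leaves the pods. A toy model of that relation (UIDs and names are made up):

```python
# Toy model of the ownership relation the garbage collector acts on.
# Pods created by a ReplicationController carry an ownerReference with the
# RC's UID; a non-orphaning delete collects them, an orphaning delete
# strips the reference and leaves them running. UIDs here are invented.
RC_UID = "aabbccdd-0000-0000-0000-000000000001"

def make_pod(name, owner_uid):
    return {"metadata": {"name": name,
                         "ownerReferences": [{"apiVersion": "v1",
                                              "kind": "ReplicationController",
                                              "uid": owner_uid,
                                              "controller": True}]}}

def collect(pods, deleted_owner_uid, orphan=False):
    """Return the pods that survive deletion of the given owner."""
    if orphan:
        for p in pods:  # orphaning strips the reference; pods survive
            p["metadata"]["ownerReferences"] = []
        return pods
    return [p for p in pods
            if all(ref["uid"] != deleted_owner_uid
                   for ref in p["metadata"]["ownerReferences"])]
```

The "wait for all pods to be garbage collected" STEP corresponds to the non-orphaning branch: every pod owned by the deleted RC disappears.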
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:55:32.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:55:45.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gnmr8" for this suite.
Jan 28 12:56:33.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:56:33.532: INFO: namespace: e2e-tests-kubelet-test-gnmr8, resource: bindings, ignored listing per whitelist
Jan 28 12:56:33.678: INFO: namespace e2e-tests-kubelet-test-gnmr8 deletion completed in 48.457157142s

• [SLOW TEST:60.718 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
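The hostAliases test above schedules a pod whose `spec.hostAliases` entries the kubelet appends to the container's /etc/hosts. A sketch of how such entries render (the IPs and hostnames are illustrative, and the exact comment banner kubelet writes is not shown here):

```python
# Sketch of rendering pod.spec.hostAliases into /etc/hosts-style lines,
# the behavior the Kubelet hostAliases test verifies. The IP and
# hostnames below are illustrative assumptions.
def render_host_aliases(aliases):
    """Render a list of hostAliases entries as hosts-file lines."""
    return [f'{a["ip"]}\t{" ".join(a["hostnames"])}' for a in aliases]

aliases = [
    {"ip": "123.45.67.89", "hostnames": ["foo.local", "bar.local"]},
]
```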
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:56:33.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:56:34.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005" in namespace "e2e-tests-downward-api-fc9dl" to be "success or failure"
Jan 28 12:56:34.215: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.182526ms
Jan 28 12:56:36.809: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.631412739s
Jan 28 12:56:38.841: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663843368s
Jan 28 12:56:40.873: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695720236s
Jan 28 12:56:43.706: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.52895206s
Jan 28 12:56:45.727: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.549436994s
Jan 28 12:56:47.745: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 13.567958291s
Jan 28 12:56:49.766: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.589251008s
STEP: Saw pod success
Jan 28 12:56:49.767: INFO: Pod "downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:56:49.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:56:50.987: INFO: Waiting for pod downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005 to disappear
Jan 28 12:56:51.115: INFO: Pod downwardapi-volume-9c9c2385-41cd-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:56:51.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fc9dl" for this suite.
Jan 28 12:56:57.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:56:57.319: INFO: namespace: e2e-tests-downward-api-fc9dl, resource: bindings, ignored listing per whitelist
Jan 28 12:56:57.411: INFO: namespace e2e-tests-downward-api-fc9dl deletion completed in 6.280017253s

• [SLOW TEST:23.732 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
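The Downward API test above mounts a volume whose file exposes the container's CPU request via a `resourceFieldRef`, scaled by a `divisor`. Resource values exposed this way are divided by the divisor and rounded up to an integer. The log does not show the request the test uses, so the values below are illustrative:

```python
# Sketch of the divisor arithmetic behind a downward-API resourceFieldRef
# for requests.cpu: the value is divided by the divisor and rounded up.
# Only the two common divisors are modeled; request values are illustrative.
import math

def render_cpu(request_millicores, divisor):
    """Render a CPU request (in millicores) as the downward API would."""
    if divisor == "1m":      # millicores, passed through
        return str(request_millicores)
    if divisor == "1":       # whole cores, rounded up
        return str(math.ceil(request_millicores / 1000))
    raise ValueError("unsupported divisor in this sketch")
```

So a 250m request renders as "250" with divisor "1m" but as "1" with divisor "1".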
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:56:57.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 12:56:57.677: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 28 12:56:57.707: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 28 12:57:03.546: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 12:57:07.580: INFO: Creating deployment "test-rolling-update-deployment"
Jan 28 12:57:07.615: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 28 12:57:07.701: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 28 12:57:10.563: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 28 12:57:10.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813028, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:57:12.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813028, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:57:14.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813028, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:57:16.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813028, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:57:18.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813038, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715813027, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 12:57:20.603: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 28 12:57:20.620: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-75q7l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75q7l/deployments/test-rolling-update-deployment,UID:b0a56175-41cd-11ea-a994-fa163e34d433,ResourceVersion:19749712,Generation:1,CreationTimestamp:2020-01-28 12:57:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-28 12:57:07 +0000 UTC 2020-01-28 12:57:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-28 12:57:18 +0000 UTC 2020-01-28 12:57:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 28 12:57:20.624: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-75q7l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75q7l/replicasets/test-rolling-update-deployment-75db98fb4c,UID:b0b7e770-41cd-11ea-a994-fa163e34d433,ResourceVersion:19749703,Generation:1,CreationTimestamp:2020-01-28 12:57:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b0a56175-41cd-11ea-a994-fa163e34d433 0xc002745d07 0xc002745d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 28 12:57:20.624: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 28 12:57:20.625: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-75q7l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75q7l/replicasets/test-rolling-update-controller,UID:aabc36c6-41cd-11ea-a994-fa163e34d433,ResourceVersion:19749711,Generation:2,CreationTimestamp:2020-01-28 12:56:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b0a56175-41cd-11ea-a994-fa163e34d433 0xc002745c47 0xc002745c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 12:57:20.632: INFO: Pod "test-rolling-update-deployment-75db98fb4c-qc4t6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-qc4t6,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-75q7l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-75q7l/pods/test-rolling-update-deployment-75db98fb4c-qc4t6,UID:b0d963bf-41cd-11ea-a994-fa163e34d433,ResourceVersion:19749702,Generation:0,CreationTimestamp:2020-01-28 12:57:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c b0b7e770-41cd-11ea-a994-fa163e34d433 0xc0024201f7 0xc0024201f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7pfks {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7pfks,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7pfks true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002420300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002420320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:57:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:57:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:57:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 12:57:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-28 12:57:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-28 12:57:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b5d307c4b0088f4e1b8b1eef236c7d3ff32305a7d14357b5cfda710de8a2997d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:57:20.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-75q7l" for this suite.
Jan 28 12:57:28.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:57:28.935: INFO: namespace: e2e-tests-deployment-75q7l, resource: bindings, ignored listing per whitelist
Jan 28 12:57:28.967: INFO: namespace e2e-tests-deployment-75q7l deletion completed in 8.329977245s

• [SLOW TEST:31.556 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:57:28.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-c89v6
I0128 12:57:29.562739       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-c89v6, replica count: 1
I0128 12:57:30.614142       8 runners.go:184] svc-latency-rc Pods: 0 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:31.614844       8 runners.go:184] svc-latency-rc Pods: 0 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:32.615371       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:33.616045       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:34.617428       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:35.618296       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:36.618743       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:37.619301       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:38.620410       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:39.621117       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:40.621627       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 12:57:41.623097       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 12:57:41.769: INFO: Created: latency-svc-7rtk9
Jan 28 12:57:41.816: INFO: Got endpoints: latency-svc-7rtk9 [91.818623ms]
Jan 28 12:57:41.972: INFO: Created: latency-svc-2m9cp
Jan 28 12:57:42.004: INFO: Got endpoints: latency-svc-2m9cp [186.463784ms]
Jan 28 12:57:42.086: INFO: Created: latency-svc-nqxws
Jan 28 12:57:42.203: INFO: Got endpoints: latency-svc-nqxws [386.022518ms]
Jan 28 12:57:42.246: INFO: Created: latency-svc-td4qc
Jan 28 12:57:42.276: INFO: Got endpoints: latency-svc-td4qc [458.40604ms]
Jan 28 12:57:42.402: INFO: Created: latency-svc-dnn4b
Jan 28 12:57:42.431: INFO: Got endpoints: latency-svc-dnn4b [227.66415ms]
Jan 28 12:57:42.636: INFO: Created: latency-svc-hggkb
Jan 28 12:57:42.657: INFO: Got endpoints: latency-svc-hggkb [839.269684ms]
Jan 28 12:57:42.857: INFO: Created: latency-svc-96wfj
Jan 28 12:57:42.883: INFO: Got endpoints: latency-svc-96wfj [1.065566916s]
Jan 28 12:57:43.088: INFO: Created: latency-svc-tc5w5
Jan 28 12:57:43.094: INFO: Got endpoints: latency-svc-tc5w5 [1.276931396s]
Jan 28 12:57:43.139: INFO: Created: latency-svc-p2xtl
Jan 28 12:57:43.356: INFO: Got endpoints: latency-svc-p2xtl [1.538803931s]
Jan 28 12:57:43.473: INFO: Created: latency-svc-8kbss
Jan 28 12:57:43.565: INFO: Got endpoints: latency-svc-8kbss [1.746463451s]
Jan 28 12:57:43.600: INFO: Created: latency-svc-f99qm
Jan 28 12:57:43.638: INFO: Got endpoints: latency-svc-f99qm [1.819209579s]
Jan 28 12:57:43.765: INFO: Created: latency-svc-cx259
Jan 28 12:57:43.836: INFO: Got endpoints: latency-svc-cx259 [2.017356371s]
Jan 28 12:57:43.841: INFO: Created: latency-svc-fjfrw
Jan 28 12:57:44.018: INFO: Got endpoints: latency-svc-fjfrw [2.199751565s]
Jan 28 12:57:44.067: INFO: Created: latency-svc-vmwcs
Jan 28 12:57:44.276: INFO: Got endpoints: latency-svc-vmwcs [2.457602346s]
Jan 28 12:57:44.297: INFO: Created: latency-svc-svnkd
Jan 28 12:57:44.321: INFO: Got endpoints: latency-svc-svnkd [2.502092385s]
Jan 28 12:57:44.520: INFO: Created: latency-svc-492bl
Jan 28 12:57:44.567: INFO: Got endpoints: latency-svc-492bl [2.748805405s]
Jan 28 12:57:44.702: INFO: Created: latency-svc-7qwlp
Jan 28 12:57:44.752: INFO: Got endpoints: latency-svc-7qwlp [2.933557299s]
Jan 28 12:57:44.901: INFO: Created: latency-svc-8nm6g
Jan 28 12:57:44.959: INFO: Got endpoints: latency-svc-8nm6g [2.954917916s]
Jan 28 12:57:44.967: INFO: Created: latency-svc-xkdh5
Jan 28 12:57:44.968: INFO: Got endpoints: latency-svc-xkdh5 [2.692034095s]
Jan 28 12:57:45.173: INFO: Created: latency-svc-rz7hs
Jan 28 12:57:45.186: INFO: Got endpoints: latency-svc-rz7hs [2.755260466s]
Jan 28 12:57:45.248: INFO: Created: latency-svc-pqz24
Jan 28 12:57:45.401: INFO: Got endpoints: latency-svc-pqz24 [2.743638183s]
Jan 28 12:57:45.442: INFO: Created: latency-svc-crhbz
Jan 28 12:57:45.466: INFO: Got endpoints: latency-svc-crhbz [2.582012679s]
Jan 28 12:57:45.572: INFO: Created: latency-svc-xswtc
Jan 28 12:57:45.584: INFO: Got endpoints: latency-svc-xswtc [2.490115959s]
Jan 28 12:57:45.677: INFO: Created: latency-svc-2zswd
Jan 28 12:57:45.811: INFO: Got endpoints: latency-svc-2zswd [2.453599768s]
Jan 28 12:57:45.843: INFO: Created: latency-svc-k5rs5
Jan 28 12:57:45.869: INFO: Got endpoints: latency-svc-k5rs5 [2.303063126s]
Jan 28 12:57:46.007: INFO: Created: latency-svc-gblwm
Jan 28 12:57:46.027: INFO: Got endpoints: latency-svc-gblwm [2.388892829s]
Jan 28 12:57:46.101: INFO: Created: latency-svc-vv4jb
Jan 28 12:57:46.263: INFO: Got endpoints: latency-svc-vv4jb [2.426209073s]
Jan 28 12:57:46.267: INFO: Created: latency-svc-r68qh
Jan 28 12:57:46.337: INFO: Got endpoints: latency-svc-r68qh [2.31798705s]
Jan 28 12:57:46.458: INFO: Created: latency-svc-47qlj
Jan 28 12:57:46.540: INFO: Got endpoints: latency-svc-47qlj [2.263530296s]
Jan 28 12:57:46.672: INFO: Created: latency-svc-scfm6
Jan 28 12:57:46.740: INFO: Created: latency-svc-585bk
Jan 28 12:57:46.754: INFO: Got endpoints: latency-svc-scfm6 [2.432636315s]
Jan 28 12:57:46.936: INFO: Got endpoints: latency-svc-585bk [2.367809669s]
Jan 28 12:57:46.977: INFO: Created: latency-svc-8qd4t
Jan 28 12:57:46.986: INFO: Got endpoints: latency-svc-8qd4t [2.233202874s]
Jan 28 12:57:47.138: INFO: Created: latency-svc-9pzxj
Jan 28 12:57:47.172: INFO: Got endpoints: latency-svc-9pzxj [2.212877075s]
Jan 28 12:57:47.372: INFO: Created: latency-svc-q9s4t
Jan 28 12:57:47.392: INFO: Got endpoints: latency-svc-q9s4t [2.423786343s]
Jan 28 12:57:47.629: INFO: Created: latency-svc-rkz8t
Jan 28 12:57:47.633: INFO: Got endpoints: latency-svc-rkz8t [2.446193397s]
Jan 28 12:57:47.833: INFO: Created: latency-svc-8hrrp
Jan 28 12:57:47.833: INFO: Got endpoints: latency-svc-8hrrp [2.432157701s]
Jan 28 12:57:48.242: INFO: Created: latency-svc-hzjff
Jan 28 12:57:48.258: INFO: Got endpoints: latency-svc-hzjff [2.792363422s]
Jan 28 12:57:48.550: INFO: Created: latency-svc-qztnx
Jan 28 12:57:48.550: INFO: Got endpoints: latency-svc-qztnx [2.96557424s]
Jan 28 12:57:48.740: INFO: Created: latency-svc-w5x2f
Jan 28 12:57:48.770: INFO: Got endpoints: latency-svc-w5x2f [2.958553697s]
Jan 28 12:57:48.976: INFO: Created: latency-svc-vdzxh
Jan 28 12:57:48.993: INFO: Got endpoints: latency-svc-vdzxh [3.12359213s]
Jan 28 12:57:49.233: INFO: Created: latency-svc-9qq7b
Jan 28 12:57:49.264: INFO: Got endpoints: latency-svc-9qq7b [3.236180009s]
Jan 28 12:57:49.497: INFO: Created: latency-svc-b5rgm
Jan 28 12:57:49.522: INFO: Got endpoints: latency-svc-b5rgm [3.258721367s]
Jan 28 12:57:49.755: INFO: Created: latency-svc-vhblm
Jan 28 12:57:49.787: INFO: Got endpoints: latency-svc-vhblm [3.449941168s]
Jan 28 12:57:50.026: INFO: Created: latency-svc-2x6gn
Jan 28 12:57:50.047: INFO: Got endpoints: latency-svc-2x6gn [3.506583565s]
Jan 28 12:57:50.214: INFO: Created: latency-svc-dnr7m
Jan 28 12:57:50.236: INFO: Got endpoints: latency-svc-dnr7m [3.481336429s]
Jan 28 12:57:50.519: INFO: Created: latency-svc-5tffb
Jan 28 12:57:50.539: INFO: Got endpoints: latency-svc-5tffb [3.602395201s]
Jan 28 12:57:50.940: INFO: Created: latency-svc-9ll9h
Jan 28 12:57:50.946: INFO: Got endpoints: latency-svc-9ll9h [3.958974603s]
Jan 28 12:57:51.541: INFO: Created: latency-svc-9tgxm
Jan 28 12:57:51.544: INFO: Created: latency-svc-ckd2h
Jan 28 12:57:51.589: INFO: Got endpoints: latency-svc-9tgxm [4.416501229s]
Jan 28 12:57:51.612: INFO: Got endpoints: latency-svc-ckd2h [4.219689211s]
Jan 28 12:57:51.788: INFO: Created: latency-svc-84t5q
Jan 28 12:57:51.841: INFO: Got endpoints: latency-svc-84t5q [4.20834867s]
Jan 28 12:57:52.042: INFO: Created: latency-svc-mj4l5
Jan 28 12:57:52.358: INFO: Created: latency-svc-m5fhm
Jan 28 12:57:52.559: INFO: Got endpoints: latency-svc-mj4l5 [4.725117435s]
Jan 28 12:57:52.618: INFO: Created: latency-svc-vlxkg
Jan 28 12:57:52.636: INFO: Got endpoints: latency-svc-m5fhm [4.377351268s]
Jan 28 12:57:52.857: INFO: Got endpoints: latency-svc-vlxkg [4.306575682s]
Jan 28 12:57:52.876: INFO: Created: latency-svc-cbn4d
Jan 28 12:57:52.889: INFO: Got endpoints: latency-svc-cbn4d [4.118766478s]
Jan 28 12:57:53.273: INFO: Created: latency-svc-c27fp
Jan 28 12:57:53.308: INFO: Got endpoints: latency-svc-c27fp [4.314846257s]
Jan 28 12:57:53.369: INFO: Created: latency-svc-4jmjm
Jan 28 12:57:53.509: INFO: Got endpoints: latency-svc-4jmjm [4.245049882s]
Jan 28 12:57:53.537: INFO: Created: latency-svc-q2qmh
Jan 28 12:57:53.575: INFO: Got endpoints: latency-svc-q2qmh [4.052150943s]
Jan 28 12:57:53.839: INFO: Created: latency-svc-bh7h8
Jan 28 12:57:53.849: INFO: Got endpoints: latency-svc-bh7h8 [4.060814758s]
Jan 28 12:57:54.027: INFO: Created: latency-svc-7cw9p
Jan 28 12:57:54.042: INFO: Got endpoints: latency-svc-7cw9p [3.994862836s]
Jan 28 12:57:54.287: INFO: Created: latency-svc-hrlsz
Jan 28 12:57:54.525: INFO: Got endpoints: latency-svc-hrlsz [4.288664339s]
Jan 28 12:57:54.578: INFO: Created: latency-svc-b29zx
Jan 28 12:57:54.712: INFO: Got endpoints: latency-svc-b29zx [4.172253575s]
Jan 28 12:57:54.780: INFO: Created: latency-svc-dmkp6
Jan 28 12:57:54.811: INFO: Got endpoints: latency-svc-dmkp6 [3.865476439s]
Jan 28 12:57:55.012: INFO: Created: latency-svc-m7pxs
Jan 28 12:57:55.039: INFO: Got endpoints: latency-svc-m7pxs [3.447903961s]
Jan 28 12:57:55.319: INFO: Created: latency-svc-zqwhd
Jan 28 12:57:55.340: INFO: Got endpoints: latency-svc-zqwhd [3.727490901s]
Jan 28 12:57:55.531: INFO: Created: latency-svc-xrtz5
Jan 28 12:57:55.562: INFO: Got endpoints: latency-svc-xrtz5 [3.719956312s]
Jan 28 12:57:55.835: INFO: Created: latency-svc-55627
Jan 28 12:57:55.835: INFO: Got endpoints: latency-svc-55627 [3.27538056s]
Jan 28 12:57:56.022: INFO: Created: latency-svc-mwwqk
Jan 28 12:57:56.040: INFO: Got endpoints: latency-svc-mwwqk [3.403635868s]
Jan 28 12:57:56.236: INFO: Created: latency-svc-dtlx6
Jan 28 12:57:56.302: INFO: Got endpoints: latency-svc-dtlx6 [3.44406049s]
Jan 28 12:57:56.462: INFO: Created: latency-svc-d8mn6
Jan 28 12:57:56.491: INFO: Got endpoints: latency-svc-d8mn6 [3.601682578s]
Jan 28 12:57:56.777: INFO: Created: latency-svc-sr62k
Jan 28 12:57:56.840: INFO: Got endpoints: latency-svc-sr62k [3.531448693s]
Jan 28 12:57:56.841: INFO: Created: latency-svc-dbrss
Jan 28 12:57:56.936: INFO: Got endpoints: latency-svc-dbrss [3.426552305s]
Jan 28 12:57:56.966: INFO: Created: latency-svc-4f4vh
Jan 28 12:57:57.002: INFO: Got endpoints: latency-svc-4f4vh [3.425912624s]
Jan 28 12:57:57.246: INFO: Created: latency-svc-jfbm7
Jan 28 12:57:57.260: INFO: Got endpoints: latency-svc-jfbm7 [3.411142839s]
Jan 28 12:57:57.515: INFO: Created: latency-svc-kzz8t
Jan 28 12:57:57.535: INFO: Got endpoints: latency-svc-kzz8t [3.492039541s]
Jan 28 12:57:57.710: INFO: Created: latency-svc-x74wq
Jan 28 12:57:57.733: INFO: Got endpoints: latency-svc-x74wq [3.208390309s]
Jan 28 12:57:57.796: INFO: Created: latency-svc-wf56w
Jan 28 12:57:57.865: INFO: Got endpoints: latency-svc-wf56w [3.153302913s]
Jan 28 12:57:57.906: INFO: Created: latency-svc-gkjn9
Jan 28 12:57:57.940: INFO: Got endpoints: latency-svc-gkjn9 [3.128241624s]
Jan 28 12:57:58.126: INFO: Created: latency-svc-7klxs
Jan 28 12:57:58.146: INFO: Got endpoints: latency-svc-7klxs [3.107152264s]
Jan 28 12:57:58.320: INFO: Created: latency-svc-84ltz
Jan 28 12:57:58.334: INFO: Got endpoints: latency-svc-84ltz [2.993977874s]
Jan 28 12:57:58.499: INFO: Created: latency-svc-4nxnw
Jan 28 12:57:58.511: INFO: Got endpoints: latency-svc-4nxnw [2.948042718s]
Jan 28 12:57:58.764: INFO: Created: latency-svc-v2bcz
Jan 28 12:57:58.769: INFO: Got endpoints: latency-svc-v2bcz [2.933723022s]
Jan 28 12:57:58.939: INFO: Created: latency-svc-rwrxf
Jan 28 12:57:58.989: INFO: Got endpoints: latency-svc-rwrxf [2.94837108s]
Jan 28 12:57:59.168: INFO: Created: latency-svc-qr428
Jan 28 12:57:59.192: INFO: Got endpoints: latency-svc-qr428 [2.889786952s]
Jan 28 12:57:59.465: INFO: Created: latency-svc-bw4fx
Jan 28 12:57:59.582: INFO: Created: latency-svc-vczc2
Jan 28 12:57:59.591: INFO: Got endpoints: latency-svc-bw4fx [3.098873759s]
Jan 28 12:57:59.609: INFO: Got endpoints: latency-svc-vczc2 [2.768086648s]
Jan 28 12:57:59.767: INFO: Created: latency-svc-6h54m
Jan 28 12:57:59.811: INFO: Got endpoints: latency-svc-6h54m [2.874127757s]
Jan 28 12:58:00.012: INFO: Created: latency-svc-smk6r
Jan 28 12:58:00.048: INFO: Got endpoints: latency-svc-smk6r [3.045598934s]
Jan 28 12:58:00.260: INFO: Created: latency-svc-nm994
Jan 28 12:58:00.284: INFO: Got endpoints: latency-svc-nm994 [3.02349493s]
Jan 28 12:58:00.510: INFO: Created: latency-svc-smhfs
Jan 28 12:58:00.537: INFO: Created: latency-svc-s9lht
Jan 28 12:58:00.672: INFO: Got endpoints: latency-svc-smhfs [3.136657178s]
Jan 28 12:58:00.687: INFO: Got endpoints: latency-svc-s9lht [2.952892384s]
Jan 28 12:58:00.720: INFO: Created: latency-svc-h5vrx
Jan 28 12:58:00.867: INFO: Got endpoints: latency-svc-h5vrx [3.001344396s]
Jan 28 12:58:00.896: INFO: Created: latency-svc-ph9vc
Jan 28 12:58:00.909: INFO: Got endpoints: latency-svc-ph9vc [2.969064394s]
Jan 28 12:58:01.064: INFO: Created: latency-svc-54z2m
Jan 28 12:58:01.066: INFO: Got endpoints: latency-svc-54z2m [2.919557042s]
Jan 28 12:58:01.151: INFO: Created: latency-svc-kzzz6
Jan 28 12:58:01.345: INFO: Got endpoints: latency-svc-kzzz6 [3.010314761s]
Jan 28 12:58:01.586: INFO: Created: latency-svc-2dpgw
Jan 28 12:58:01.651: INFO: Created: latency-svc-2hxtx
Jan 28 12:58:02.047: INFO: Got endpoints: latency-svc-2dpgw [3.535191312s]
Jan 28 12:58:02.951: INFO: Got endpoints: latency-svc-2hxtx [4.181699943s]
Jan 28 12:58:03.014: INFO: Created: latency-svc-9btld
Jan 28 12:58:03.354: INFO: Got endpoints: latency-svc-9btld [4.364791165s]
Jan 28 12:58:03.364: INFO: Created: latency-svc-tdmph
Jan 28 12:58:03.454: INFO: Got endpoints: latency-svc-tdmph [4.260407148s]
Jan 28 12:58:03.528: INFO: Created: latency-svc-gdvl5
Jan 28 12:58:03.641: INFO: Got endpoints: latency-svc-gdvl5 [4.050306685s]
Jan 28 12:58:03.655: INFO: Created: latency-svc-ld5zj
Jan 28 12:58:03.678: INFO: Got endpoints: latency-svc-ld5zj [4.069554451s]
Jan 28 12:58:03.815: INFO: Created: latency-svc-9pfwm
Jan 28 12:58:03.842: INFO: Got endpoints: latency-svc-9pfwm [4.030174659s]
Jan 28 12:58:03.913: INFO: Created: latency-svc-jdd9q
Jan 28 12:58:04.036: INFO: Got endpoints: latency-svc-jdd9q [3.987884151s]
Jan 28 12:58:04.090: INFO: Created: latency-svc-gslqm
Jan 28 12:58:04.102: INFO: Got endpoints: latency-svc-gslqm [3.817662157s]
Jan 28 12:58:04.254: INFO: Created: latency-svc-bxbgk
Jan 28 12:58:04.270: INFO: Got endpoints: latency-svc-bxbgk [3.597439021s]
Jan 28 12:58:04.414: INFO: Created: latency-svc-tnjgr
Jan 28 12:58:04.431: INFO: Got endpoints: latency-svc-tnjgr [3.74345768s]
Jan 28 12:58:04.484: INFO: Created: latency-svc-7tw5x
Jan 28 12:58:04.616: INFO: Got endpoints: latency-svc-7tw5x [3.748581042s]
Jan 28 12:58:04.662: INFO: Created: latency-svc-xlqqd
Jan 28 12:58:04.694: INFO: Got endpoints: latency-svc-xlqqd [3.784609505s]
Jan 28 12:58:04.905: INFO: Created: latency-svc-6x6rz
Jan 28 12:58:04.939: INFO: Got endpoints: latency-svc-6x6rz [3.87290974s]
Jan 28 12:58:05.121: INFO: Created: latency-svc-gkhhx
Jan 28 12:58:05.406: INFO: Got endpoints: latency-svc-gkhhx [4.059993933s]
Jan 28 12:58:05.435: INFO: Created: latency-svc-hscmh
Jan 28 12:58:05.474: INFO: Got endpoints: latency-svc-hscmh [3.425914521s]
Jan 28 12:58:05.502: INFO: Created: latency-svc-crndf
Jan 28 12:58:05.611: INFO: Got endpoints: latency-svc-crndf [2.660199118s]
Jan 28 12:58:05.821: INFO: Created: latency-svc-txc8v
Jan 28 12:58:05.841: INFO: Got endpoints: latency-svc-txc8v [2.486922979s]
Jan 28 12:58:06.042: INFO: Created: latency-svc-zgcqw
Jan 28 12:58:06.065: INFO: Got endpoints: latency-svc-zgcqw [2.610837193s]
Jan 28 12:58:06.123: INFO: Created: latency-svc-6ptm4
Jan 28 12:58:06.254: INFO: Got endpoints: latency-svc-6ptm4 [2.612710585s]
Jan 28 12:58:06.496: INFO: Created: latency-svc-fqzxr
Jan 28 12:58:06.539: INFO: Got endpoints: latency-svc-fqzxr [2.859799406s]
Jan 28 12:58:06.729: INFO: Created: latency-svc-6ldzl
Jan 28 12:58:06.741: INFO: Got endpoints: latency-svc-6ldzl [2.898569056s]
Jan 28 12:58:06.911: INFO: Created: latency-svc-hdsj7
Jan 28 12:58:06.990: INFO: Got endpoints: latency-svc-hdsj7 [2.952725966s]
Jan 28 12:58:07.010: INFO: Created: latency-svc-hgmhq
Jan 28 12:58:07.096: INFO: Got endpoints: latency-svc-hgmhq [2.99299231s]
Jan 28 12:58:07.478: INFO: Created: latency-svc-dpwkf
Jan 28 12:58:07.515: INFO: Got endpoints: latency-svc-dpwkf [3.243813279s]
Jan 28 12:58:07.548: INFO: Created: latency-svc-dd4gx
Jan 28 12:58:07.702: INFO: Got endpoints: latency-svc-dd4gx [3.271602928s]
Jan 28 12:58:07.721: INFO: Created: latency-svc-tc9zx
Jan 28 12:58:07.736: INFO: Got endpoints: latency-svc-tc9zx [3.119315601s]
Jan 28 12:58:07.945: INFO: Created: latency-svc-nlb2l
Jan 28 12:58:08.076: INFO: Got endpoints: latency-svc-nlb2l [3.381167986s]
Jan 28 12:58:08.119: INFO: Created: latency-svc-4bk5h
Jan 28 12:58:08.144: INFO: Got endpoints: latency-svc-4bk5h [3.204248202s]
Jan 28 12:58:08.362: INFO: Created: latency-svc-ddwfh
Jan 28 12:58:08.378: INFO: Got endpoints: latency-svc-ddwfh [2.971707828s]
Jan 28 12:58:08.545: INFO: Created: latency-svc-5v2lb
Jan 28 12:58:08.605: INFO: Got endpoints: latency-svc-5v2lb [3.131183339s]
Jan 28 12:58:08.746: INFO: Created: latency-svc-wfk2c
Jan 28 12:58:08.767: INFO: Got endpoints: latency-svc-wfk2c [3.155326639s]
Jan 28 12:58:08.912: INFO: Created: latency-svc-t7rl4
Jan 28 12:58:08.933: INFO: Got endpoints: latency-svc-t7rl4 [3.091036886s]
Jan 28 12:58:09.136: INFO: Created: latency-svc-5wthc
Jan 28 12:58:09.166: INFO: Got endpoints: latency-svc-5wthc [3.100807763s]
Jan 28 12:58:09.387: INFO: Created: latency-svc-ctr24
Jan 28 12:58:09.449: INFO: Created: latency-svc-bdhmp
Jan 28 12:58:09.457: INFO: Got endpoints: latency-svc-ctr24 [3.202100488s]
Jan 28 12:58:09.669: INFO: Got endpoints: latency-svc-bdhmp [3.129291073s]
Jan 28 12:58:09.691: INFO: Created: latency-svc-bn8hd
Jan 28 12:58:09.729: INFO: Got endpoints: latency-svc-bn8hd [2.987877321s]
Jan 28 12:58:09.866: INFO: Created: latency-svc-7vnds
Jan 28 12:58:09.987: INFO: Got endpoints: latency-svc-7vnds [2.996144554s]
Jan 28 12:58:10.002: INFO: Created: latency-svc-cdsk2
Jan 28 12:58:10.078: INFO: Got endpoints: latency-svc-cdsk2 [2.982236464s]
Jan 28 12:58:10.110: INFO: Created: latency-svc-jcckt
Jan 28 12:58:10.144: INFO: Got endpoints: latency-svc-jcckt [2.628724997s]
Jan 28 12:58:10.196: INFO: Created: latency-svc-q4twn
Jan 28 12:58:10.218: INFO: Got endpoints: latency-svc-q4twn [2.514944995s]
Jan 28 12:58:11.558: INFO: Created: latency-svc-xvtsx
Jan 28 12:58:11.563: INFO: Got endpoints: latency-svc-xvtsx [3.827023567s]
Jan 28 12:58:11.710: INFO: Created: latency-svc-xcm7f
Jan 28 12:58:11.731: INFO: Got endpoints: latency-svc-xcm7f [3.654915568s]
Jan 28 12:58:11.909: INFO: Created: latency-svc-657zf
Jan 28 12:58:11.929: INFO: Got endpoints: latency-svc-657zf [3.783978916s]
Jan 28 12:58:12.039: INFO: Created: latency-svc-b2nbw
Jan 28 12:58:12.055: INFO: Got endpoints: latency-svc-b2nbw [3.677088269s]
Jan 28 12:58:12.135: INFO: Created: latency-svc-52tvj
Jan 28 12:58:12.242: INFO: Got endpoints: latency-svc-52tvj [3.635634686s]
Jan 28 12:58:12.267: INFO: Created: latency-svc-2s8sv
Jan 28 12:58:12.285: INFO: Got endpoints: latency-svc-2s8sv [3.51785319s]
Jan 28 12:58:12.464: INFO: Created: latency-svc-wzd9l
Jan 28 12:58:12.512: INFO: Got endpoints: latency-svc-wzd9l [3.578647214s]
Jan 28 12:58:12.722: INFO: Created: latency-svc-29smg
Jan 28 12:58:12.874: INFO: Got endpoints: latency-svc-29smg [3.707221748s]
Jan 28 12:58:12.891: INFO: Created: latency-svc-vvjks
Jan 28 12:58:12.965: INFO: Created: latency-svc-zv5cg
Jan 28 12:58:12.976: INFO: Got endpoints: latency-svc-vvjks [3.518714135s]
Jan 28 12:58:13.147: INFO: Created: latency-svc-6tn84
Jan 28 12:58:13.158: INFO: Got endpoints: latency-svc-zv5cg [3.48856755s]
Jan 28 12:58:13.337: INFO: Got endpoints: latency-svc-6tn84 [3.607728003s]
Jan 28 12:58:13.371: INFO: Created: latency-svc-4f6md
Jan 28 12:58:13.510: INFO: Created: latency-svc-m6ms7
Jan 28 12:58:13.512: INFO: Got endpoints: latency-svc-4f6md [3.525099722s]
Jan 28 12:58:13.530: INFO: Got endpoints: latency-svc-m6ms7 [3.45058744s]
Jan 28 12:58:13.699: INFO: Created: latency-svc-gkj58
Jan 28 12:58:13.725: INFO: Got endpoints: latency-svc-gkj58 [3.580790853s]
Jan 28 12:58:13.907: INFO: Created: latency-svc-tkgfs
Jan 28 12:58:13.935: INFO: Got endpoints: latency-svc-tkgfs [3.716628888s]
Jan 28 12:58:14.110: INFO: Created: latency-svc-nbsm5
Jan 28 12:58:14.177: INFO: Created: latency-svc-6sxws
Jan 28 12:58:14.179: INFO: Got endpoints: latency-svc-nbsm5 [2.614986899s]
Jan 28 12:58:14.311: INFO: Got endpoints: latency-svc-6sxws [2.579031951s]
Jan 28 12:58:14.363: INFO: Created: latency-svc-7r7qc
Jan 28 12:58:14.586: INFO: Created: latency-svc-shxkc
Jan 28 12:58:14.586: INFO: Got endpoints: latency-svc-7r7qc [2.656436629s]
Jan 28 12:58:14.612: INFO: Got endpoints: latency-svc-shxkc [2.556698074s]
Jan 28 12:58:14.776: INFO: Created: latency-svc-xxn4p
Jan 28 12:58:14.787: INFO: Got endpoints: latency-svc-xxn4p [2.544610331s]
Jan 28 12:58:14.907: INFO: Created: latency-svc-mxpkl
Jan 28 12:58:14.955: INFO: Got endpoints: latency-svc-mxpkl [2.669775502s]
Jan 28 12:58:14.986: INFO: Created: latency-svc-5cfzb
Jan 28 12:58:15.000: INFO: Got endpoints: latency-svc-5cfzb [2.487278147s]
Jan 28 12:58:15.045: INFO: Created: latency-svc-888rb
Jan 28 12:58:15.131: INFO: Got endpoints: latency-svc-888rb [2.246412326s]
Jan 28 12:58:15.162: INFO: Created: latency-svc-99qg9
Jan 28 12:58:15.174: INFO: Got endpoints: latency-svc-99qg9 [2.197582228s]
Jan 28 12:58:15.228: INFO: Created: latency-svc-p95pz
Jan 28 12:58:15.437: INFO: Got endpoints: latency-svc-p95pz [2.278400816s]
Jan 28 12:58:15.556: INFO: Created: latency-svc-ck8b6
Jan 28 12:58:15.572: INFO: Got endpoints: latency-svc-ck8b6 [2.234685145s]
Jan 28 12:58:15.743: INFO: Created: latency-svc-mrq76
Jan 28 12:58:15.752: INFO: Got endpoints: latency-svc-mrq76 [2.239621028s]
Jan 28 12:58:15.819: INFO: Created: latency-svc-pddsl
Jan 28 12:58:15.820: INFO: Got endpoints: latency-svc-pddsl [2.289839456s]
Jan 28 12:58:15.960: INFO: Created: latency-svc-7qkc4
Jan 28 12:58:16.018: INFO: Got endpoints: latency-svc-7qkc4 [2.291540911s]
Jan 28 12:58:16.118: INFO: Created: latency-svc-8sp62
Jan 28 12:58:16.135: INFO: Got endpoints: latency-svc-8sp62 [2.199933809s]
Jan 28 12:58:16.197: INFO: Created: latency-svc-lf9rk
Jan 28 12:58:16.340: INFO: Got endpoints: latency-svc-lf9rk [2.1612593s]
Jan 28 12:58:16.402: INFO: Created: latency-svc-7q54m
Jan 28 12:58:16.602: INFO: Got endpoints: latency-svc-7q54m [2.290257519s]
Jan 28 12:58:16.612: INFO: Created: latency-svc-4m8vd
Jan 28 12:58:16.625: INFO: Got endpoints: latency-svc-4m8vd [2.03856784s]
Jan 28 12:58:16.920: INFO: Created: latency-svc-lznmg
Jan 28 12:58:17.100: INFO: Got endpoints: latency-svc-lznmg [2.48730014s]
Jan 28 12:58:17.131: INFO: Created: latency-svc-jbw2t
Jan 28 12:58:17.149: INFO: Got endpoints: latency-svc-jbw2t [2.36189888s]
Jan 28 12:58:17.356: INFO: Created: latency-svc-swxvc
Jan 28 12:58:17.374: INFO: Got endpoints: latency-svc-swxvc [2.418576827s]
Jan 28 12:58:17.526: INFO: Created: latency-svc-r79lf
Jan 28 12:58:17.564: INFO: Got endpoints: latency-svc-r79lf [2.564284624s]
Jan 28 12:58:17.671: INFO: Created: latency-svc-vxnv5
Jan 28 12:58:17.722: INFO: Got endpoints: latency-svc-vxnv5 [2.590388636s]
Jan 28 12:58:17.755: INFO: Created: latency-svc-c6l2g
Jan 28 12:58:17.858: INFO: Got endpoints: latency-svc-c6l2g [2.683590284s]
Jan 28 12:58:17.901: INFO: Created: latency-svc-bswmp
Jan 28 12:58:17.922: INFO: Got endpoints: latency-svc-bswmp [2.484253489s]
Jan 28 12:58:18.074: INFO: Created: latency-svc-f4rf4
Jan 28 12:58:18.078: INFO: Got endpoints: latency-svc-f4rf4 [2.505202675s]
Jan 28 12:58:18.127: INFO: Created: latency-svc-zvhb7
Jan 28 12:58:18.415: INFO: Created: latency-svc-k8rpn
Jan 28 12:58:18.415: INFO: Got endpoints: latency-svc-zvhb7 [2.662837088s]
Jan 28 12:58:18.433: INFO: Got endpoints: latency-svc-k8rpn [2.613606675s]
Jan 28 12:58:18.583: INFO: Created: latency-svc-d59lv
Jan 28 12:58:18.606: INFO: Got endpoints: latency-svc-d59lv [2.587688549s]
Jan 28 12:58:18.801: INFO: Created: latency-svc-gvftb
Jan 28 12:58:18.802: INFO: Got endpoints: latency-svc-gvftb [2.66594007s]
Jan 28 12:58:18.863: INFO: Created: latency-svc-dlw6b
Jan 28 12:58:18.963: INFO: Got endpoints: latency-svc-dlw6b [2.622546483s]
Jan 28 12:58:19.142: INFO: Created: latency-svc-vwpdf
Jan 28 12:58:19.163: INFO: Got endpoints: latency-svc-vwpdf [2.561399257s]
Jan 28 12:58:19.359: INFO: Created: latency-svc-rtwwp
Jan 28 12:58:19.381: INFO: Got endpoints: latency-svc-rtwwp [2.756272522s]
Jan 28 12:58:19.428: INFO: Created: latency-svc-bsb6v
Jan 28 12:58:19.440: INFO: Got endpoints: latency-svc-bsb6v [2.339201357s]
Jan 28 12:58:19.741: INFO: Created: latency-svc-j7bkv
Jan 28 12:58:19.812: INFO: Got endpoints: latency-svc-j7bkv [2.662731526s]
Jan 28 12:58:19.855: INFO: Created: latency-svc-xfbfc
Jan 28 12:58:19.885: INFO: Got endpoints: latency-svc-xfbfc [2.510317925s]
Jan 28 12:58:20.006: INFO: Created: latency-svc-8q9mh
Jan 28 12:58:20.023: INFO: Got endpoints: latency-svc-8q9mh [2.457853552s]
Jan 28 12:58:20.150: INFO: Created: latency-svc-8w5r7
Jan 28 12:58:20.203: INFO: Got endpoints: latency-svc-8w5r7 [2.48074624s]
Jan 28 12:58:20.332: INFO: Created: latency-svc-gmhzx
Jan 28 12:58:20.353: INFO: Got endpoints: latency-svc-gmhzx [2.493750466s]
Jan 28 12:58:20.402: INFO: Created: latency-svc-zdddq
Jan 28 12:58:20.489: INFO: Got endpoints: latency-svc-zdddq [2.567224984s]
Jan 28 12:58:20.555: INFO: Created: latency-svc-d5cxh
Jan 28 12:58:20.608: INFO: Got endpoints: latency-svc-d5cxh [2.52990073s]
Jan 28 12:58:20.797: INFO: Created: latency-svc-dgttm
Jan 28 12:58:20.824: INFO: Got endpoints: latency-svc-dgttm [2.408005183s]
Jan 28 12:58:20.863: INFO: Created: latency-svc-fkq4x
Jan 28 12:58:20.984: INFO: Got endpoints: latency-svc-fkq4x [2.55081303s]
Jan 28 12:58:21.026: INFO: Created: latency-svc-nplnb
Jan 28 12:58:21.058: INFO: Got endpoints: latency-svc-nplnb [2.451349276s]
Jan 28 12:58:21.163: INFO: Created: latency-svc-hhtsv
Jan 28 12:58:21.173: INFO: Got endpoints: latency-svc-hhtsv [2.371513148s]
Jan 28 12:58:21.237: INFO: Created: latency-svc-dgcdl
Jan 28 12:58:21.376: INFO: Got endpoints: latency-svc-dgcdl [2.412054892s]
Jan 28 12:58:21.406: INFO: Created: latency-svc-mg9xw
Jan 28 12:58:21.562: INFO: Got endpoints: latency-svc-mg9xw [2.39857024s]
Jan 28 12:58:21.571: INFO: Created: latency-svc-wkprf
Jan 28 12:58:21.624: INFO: Got endpoints: latency-svc-wkprf [2.242200756s]
Jan 28 12:58:21.803: INFO: Created: latency-svc-764p4
Jan 28 12:58:21.826: INFO: Got endpoints: latency-svc-764p4 [2.385805239s]
Jan 28 12:58:22.129: INFO: Created: latency-svc-4r27z
Jan 28 12:58:22.171: INFO: Got endpoints: latency-svc-4r27z [2.357517212s]
Jan 28 12:58:22.411: INFO: Created: latency-svc-tx8jb
Jan 28 12:58:22.606: INFO: Got endpoints: latency-svc-tx8jb [2.719973074s]
Jan 28 12:58:22.606: INFO: Latencies: [186.463784ms 227.66415ms 386.022518ms 458.40604ms 839.269684ms 1.065566916s 1.276931396s 1.538803931s 1.746463451s 1.819209579s 2.017356371s 2.03856784s 2.1612593s 2.197582228s 2.199751565s 2.199933809s 2.212877075s 2.233202874s 2.234685145s 2.239621028s 2.242200756s 2.246412326s 2.263530296s 2.278400816s 2.289839456s 2.290257519s 2.291540911s 2.303063126s 2.31798705s 2.339201357s 2.357517212s 2.36189888s 2.367809669s 2.371513148s 2.385805239s 2.388892829s 2.39857024s 2.408005183s 2.412054892s 2.418576827s 2.423786343s 2.426209073s 2.432157701s 2.432636315s 2.446193397s 2.451349276s 2.453599768s 2.457602346s 2.457853552s 2.48074624s 2.484253489s 2.486922979s 2.487278147s 2.48730014s 2.490115959s 2.493750466s 2.502092385s 2.505202675s 2.510317925s 2.514944995s 2.52990073s 2.544610331s 2.55081303s 2.556698074s 2.561399257s 2.564284624s 2.567224984s 2.579031951s 2.582012679s 2.587688549s 2.590388636s 2.610837193s 2.612710585s 2.613606675s 2.614986899s 2.622546483s 2.628724997s 2.656436629s 2.660199118s 2.662731526s 2.662837088s 2.66594007s 2.669775502s 2.683590284s 2.692034095s 2.719973074s 2.743638183s 2.748805405s 2.755260466s 2.756272522s 2.768086648s 2.792363422s 2.859799406s 2.874127757s 2.889786952s 2.898569056s 2.919557042s 2.933557299s 2.933723022s 2.948042718s 2.94837108s 2.952725966s 2.952892384s 2.954917916s 2.958553697s 2.96557424s 2.969064394s 2.971707828s 2.982236464s 2.987877321s 2.99299231s 2.993977874s 2.996144554s 3.001344396s 3.010314761s 3.02349493s 3.045598934s 3.091036886s 3.098873759s 3.100807763s 3.107152264s 3.119315601s 3.12359213s 3.128241624s 3.129291073s 3.131183339s 3.136657178s 3.153302913s 3.155326639s 3.202100488s 3.204248202s 3.208390309s 3.236180009s 3.243813279s 3.258721367s 3.271602928s 3.27538056s 3.381167986s 3.403635868s 3.411142839s 3.425912624s 3.425914521s 3.426552305s 3.44406049s 3.447903961s 3.449941168s 3.45058744s 3.481336429s 3.48856755s 3.492039541s 3.506583565s 3.51785319s 3.518714135s 3.525099722s 3.531448693s 3.535191312s 3.578647214s 3.580790853s 3.597439021s 3.601682578s 3.602395201s 3.607728003s 3.635634686s 3.654915568s 3.677088269s 3.707221748s 3.716628888s 3.719956312s 3.727490901s 3.74345768s 3.748581042s 3.783978916s 3.784609505s 3.817662157s 3.827023567s 3.865476439s 3.87290974s 3.958974603s 3.987884151s 3.994862836s 4.030174659s 4.050306685s 4.052150943s 4.059993933s 4.060814758s 4.069554451s 4.118766478s 4.172253575s 4.181699943s 4.20834867s 4.219689211s 4.245049882s 4.260407148s 4.288664339s 4.306575682s 4.314846257s 4.364791165s 4.377351268s 4.416501229s 4.725117435s]
Jan 28 12:58:22.607: INFO: 50 %ile: 2.94837108s
Jan 28 12:58:22.607: INFO: 90 %ile: 4.030174659s
Jan 28 12:58:22.607: INFO: 99 %ile: 4.416501229s
Jan 28 12:58:22.607: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:58:22.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-c89v6" for this suite.
Jan 28 12:59:34.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:59:34.782: INFO: namespace: e2e-tests-svc-latency-c89v6, resource: bindings, ignored listing per whitelist
Jan 28 12:59:34.839: INFO: namespace e2e-tests-svc-latency-c89v6 deletion completed in 1m12.207850926s

• [SLOW TEST:125.871 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
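The [sig-network] Service endpoints latency test above measures, for each of 200 generated Services, the time from Service creation until its endpoints are observed (the "Got endpoints" lines), then reports the 50/90/99 percentiles. A hedged sketch of the kind of Service the test creates in a loop — the name and selector below are illustrative, not taken from the log:

```yaml
# Hypothetical minimal Service of the kind this latency test creates
# repeatedly; the measured latency is the interval between creating this
# object ("Created: latency-svc-...") and observing a populated Endpoints
# object for it ("Got endpoints: latency-svc-...").
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example   # the log shows generated names like latency-svc-f4rf4
spec:
  selector:
    app: svc-latency-backend  # assumed label; must match the test's backend pods
  ports:
    - port: 80
      targetPort: 8080
```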
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:59:34.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:59:35.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-smtb9" to be "success or failure"
Jan 28 12:59:35.125: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.521089ms
Jan 28 12:59:37.270: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159452117s
Jan 28 12:59:39.303: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192200295s
Jan 28 12:59:41.337: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226551545s
Jan 28 12:59:43.785: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.674674627s
Jan 28 12:59:45.813: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702028075s
Jan 28 12:59:47.852: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.740850623s
Jan 28 12:59:49.884: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.772940547s
STEP: Saw pod success
Jan 28 12:59:49.884: INFO: Pod "downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 12:59:49.892: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005 container client-container: 
STEP: delete the pod
Jan 28 12:59:50.609: INFO: Waiting for pod downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005 to disappear
Jan 28 12:59:50.845: INFO: Pod downwardapi-volume-0890c450-41ce-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 12:59:50.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-smtb9" for this suite.
Jan 28 12:59:56.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:59:57.029: INFO: namespace: e2e-tests-projected-smtb9, resource: bindings, ignored listing per whitelist
Jan 28 12:59:57.096: INFO: namespace e2e-tests-projected-smtb9 deletion completed in 6.217719778s

• [SLOW TEST:22.256 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
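The Projected downwardAPI test above creates a pod whose container sets no cpu limit and mounts a projected downward-API volume exposing `limits.cpu`; since the limit is unset, the file falls back to the node's allocatable cpu. A minimal sketch of such a pod, with illustrative names and image (the log only shows the generated pod name and the container name `client-container`):

```yaml
# Hedged sketch of the downward-API pod this test builds: no cpu limit is
# declared, so the projected limits.cpu file reports node allocatable cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the log shows a generated name
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox                  # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      # note: no resources.limits.cpu here, which is the point of the test
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: cpu_limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.cpu
                    divisor: 1m       # report the value in millicores
```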
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 12:59:57.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-15e83133-41ce-11ea-a04a-0242ac110005
STEP: Creating secret with name s-test-opt-upd-15e8330a-41ce-11ea-a04a-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-15e83133-41ce-11ea-a04a-0242ac110005
STEP: Updating secret s-test-opt-upd-15e8330a-41ce-11ea-a04a-0242ac110005
STEP: Creating secret with name s-test-opt-create-15e83379-41ce-11ea-a04a-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:01:37.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tzq5j" for this suite.
Jan 28 13:02:17.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:02:17.879: INFO: namespace: e2e-tests-projected-tzq5j, resource: bindings, ignored listing per whitelist
Jan 28 13:02:17.905: INFO: namespace e2e-tests-projected-tzq5j deletion completed in 40.232659924s

• [SLOW TEST:140.809 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
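The Projected secret test above mounts three secrets as `optional` sources of one projected volume, then deletes one, updates one, and creates the third while the pod runs, waiting for the kubelet to reflect each change in the volume. A sketch of the pod spec this implies (secret names shortened from the generated names in the log; container name and image are assumptions):

```yaml
# Illustrative pod for the optional-secret-update test: because each source
# is optional, deleting s-test-opt-del or creating s-test-opt-create after
# the pod starts does not fail the pod; the mounted volume is updated in place.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets        # assumed name
spec:
  containers:
    - name: secret-volume-test       # assumed name
      image: busybox                 # assumed image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
  volumes:
    - name: projected-secret-volume
      projected:
        sources:
          - secret:
              name: s-test-opt-del     # deleted mid-test
              optional: true
          - secret:
              name: s-test-opt-upd     # updated mid-test
              optional: true
          - secret:
              name: s-test-opt-create  # created only after the pod starts
              optional: true
```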
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:02:17.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 28 13:02:26.652: INFO: 10 pods remaining
Jan 28 13:02:26.652: INFO: 10 pods has nil DeletionTimestamp
Jan 28 13:02:26.652: INFO: 
Jan 28 13:02:27.075: INFO: 10 pods remaining
Jan 28 13:02:27.076: INFO: 10 pods has nil DeletionTimestamp
Jan 28 13:02:27.076: INFO: 
Jan 28 13:02:28.784: INFO: 9 pods remaining
Jan 28 13:02:28.784: INFO: 0 pods has nil DeletionTimestamp
Jan 28 13:02:28.784: INFO: 
Jan 28 13:02:31.228: INFO: 0 pods remaining
Jan 28 13:02:31.229: INFO: 0 pods has nil DeletionTimestamp
Jan 28 13:02:31.229: INFO: 
STEP: Gathering metrics
W0128 13:02:32.264554       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 13:02:32.264: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:02:32.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4br8s" for this suite.
Jan 28 13:02:46.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:02:46.602: INFO: namespace: e2e-tests-gc-4br8s, resource: bindings, ignored listing per whitelist
Jan 28 13:02:46.698: INFO: namespace e2e-tests-gc-4br8s deletion completed in 14.42597118s

• [SLOW TEST:28.793 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
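The Garbage collector test above deletes the ReplicationController with delete options that keep the rc around until all its pods are gone — i.e. foreground cascading deletion, which is why the log counts pods down (10 → 9 → 0) before the rc disappears. A sketch of the delete-options body that requests this behavior:

```yaml
# Hypothetical request body for DELETE on the ReplicationController:
# Foreground propagation adds a foregroundDeletion finalizer to the rc, so
# the API keeps it visible until the garbage collector has deleted all of
# its dependent pods.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```

In recent kubectl versions the same behavior is available as `kubectl delete rc <name> --cascade=foreground` (this flag form postdates the v1.13 client shown in this run).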
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:02:46.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 28 13:02:46.909: INFO: Waiting up to 5m0s for pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005" in namespace "e2e-tests-containers-khk8j" to be "success or failure"
Jan 28 13:02:46.913: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10629ms
Jan 28 13:02:48.944: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034727647s
Jan 28 13:02:50.994: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085392549s
Jan 28 13:02:53.010: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101379225s
Jan 28 13:02:55.117: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207735343s
Jan 28 13:02:57.136: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226917234s
Jan 28 13:02:59.177: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.268030512s
STEP: Saw pod success
Jan 28 13:02:59.177: INFO: Pod "client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 13:02:59.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 13:02:59.389: INFO: Waiting for pod client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005 to disappear
Jan 28 13:02:59.425: INFO: Pod client-containers-7ae1a49f-41ce-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:02:59.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-khk8j" for this suite.
Jan 28 13:03:05.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:03:05.538: INFO: namespace: e2e-tests-containers-khk8j, resource: bindings, ignored listing per whitelist
Jan 28 13:03:05.600: INFO: namespace e2e-tests-containers-khk8j deletion completed in 6.162248902s

• [SLOW TEST:18.901 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
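The Docker Containers test above verifies that a pod's `args` field overrides the image's default arguments (docker `CMD`) without replacing its entrypoint. A minimal sketch of such a pod — the image and argument values here are assumptions, not taken from the log:

```yaml
# Hedged sketch of an args-override pod: `args` replaces the image's CMD,
# while the image's ENTRYPOINT (if any) is left in effect. To override the
# entrypoint as well, `command` would be set instead.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative; the log shows a generated name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox                # assumed image
      args: ["echo", "override", "arguments"]
```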
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:03:05.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 28 13:03:05.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-s5n6b,SelfLink:/api/v1/namespaces/e2e-tests-watch-s5n6b/configmaps/e2e-watch-test-watch-closed,UID:8623c9d0-41ce-11ea-a994-fa163e34d433,ResourceVersion:19751567,Generation:0,CreationTimestamp:2020-01-28 13:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 13:03:05.793: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-s5n6b,SelfLink:/api/v1/namespaces/e2e-tests-watch-s5n6b/configmaps/e2e-watch-test-watch-closed,UID:8623c9d0-41ce-11ea-a994-fa163e34d433,ResourceVersion:19751568,Generation:0,CreationTimestamp:2020-01-28 13:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 28 13:03:05.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-s5n6b,SelfLink:/api/v1/namespaces/e2e-tests-watch-s5n6b/configmaps/e2e-watch-test-watch-closed,UID:8623c9d0-41ce-11ea-a994-fa163e34d433,ResourceVersion:19751569,Generation:0,CreationTimestamp:2020-01-28 13:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 13:03:05.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-s5n6b,SelfLink:/api/v1/namespaces/e2e-tests-watch-s5n6b/configmaps/e2e-watch-test-watch-closed,UID:8623c9d0-41ce-11ea-a994-fa163e34d433,ResourceVersion:19751570,Generation:0,CreationTimestamp:2020-01-28 13:03:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:03:05.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-s5n6b" for this suite.
Jan 28 13:03:11.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:03:12.027: INFO: namespace: e2e-tests-watch-s5n6b, resource: bindings, ignored listing per whitelist
Jan 28 13:03:12.110: INFO: namespace e2e-tests-watch-s5n6b deletion completed in 6.29058184s

• [SLOW TEST:6.509 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
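The Watchers test above closes a watch after two events, mutates the ConfigMap again, then opens a new watch from the last observed `ResourceVersion` and receives exactly the missed MODIFIED and DELETED events. The ConfigMap being watched, reconstructed from the log's object dumps (name and label are taken verbatim from the log):

```yaml
# The ConfigMap the watch test mutates; the watch-this-configmap label is
# what the test's watch selector matches. A client can resume a closed watch
# by passing the last seen resourceVersion, e.g. (illustrative command):
#   kubectl get configmaps -l watch-this-configmap=watch-closed-and-restarted \
#     --watch --output-watch-events
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "2"   # incremented on each modification; "2" at deletion time
```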
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:03:12.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wv95q
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 28 13:03:12.420: INFO: Found 0 stateful pods, waiting for 3
Jan 28 13:03:22.511: INFO: Found 2 stateful pods, waiting for 3
Jan 28 13:03:32.859: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:03:32.859: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:03:32.859: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 13:03:42.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:03:42.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:03:42.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:03:42.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wv95q ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:03:43.071: INFO: stderr: "I0128 13:03:42.717214    3614 log.go:172] (0xc00015c6e0) (0xc0007d4640) Create stream\nI0128 13:03:42.717416    3614 log.go:172] (0xc00015c6e0) (0xc0007d4640) Stream added, broadcasting: 1\nI0128 13:03:42.721960    3614 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0128 13:03:42.721998    3614 log.go:172] (0xc00015c6e0) (0xc0005d0be0) Create stream\nI0128 13:03:42.722006    3614 log.go:172] (0xc00015c6e0) (0xc0005d0be0) Stream added, broadcasting: 3\nI0128 13:03:42.722840    3614 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0128 13:03:42.722861    3614 log.go:172] (0xc00015c6e0) (0xc00050c000) Create stream\nI0128 13:03:42.722870    3614 log.go:172] (0xc00015c6e0) (0xc00050c000) Stream added, broadcasting: 5\nI0128 13:03:42.723631    3614 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0128 13:03:42.935217    3614 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0128 13:03:42.935344    3614 log.go:172] (0xc0005d0be0) (3) Data frame handling\nI0128 13:03:42.935412    3614 log.go:172] (0xc0005d0be0) (3) Data frame sent\nI0128 13:03:43.057284    3614 log.go:172] (0xc00015c6e0) (0xc0005d0be0) Stream removed, broadcasting: 3\nI0128 13:03:43.057771    3614 log.go:172] (0xc00015c6e0) (0xc00050c000) Stream removed, broadcasting: 5\nI0128 13:03:43.057807    3614 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0128 13:03:43.057843    3614 log.go:172] (0xc0007d4640) (1) Data frame handling\nI0128 13:03:43.057863    3614 log.go:172] (0xc0007d4640) (1) Data frame sent\nI0128 13:03:43.057877    3614 log.go:172] (0xc00015c6e0) (0xc0007d4640) Stream removed, broadcasting: 1\nI0128 13:03:43.057898    3614 log.go:172] (0xc00015c6e0) Go away received\nI0128 13:03:43.058869    3614 log.go:172] (0xc00015c6e0) (0xc0007d4640) Stream removed, broadcasting: 1\nI0128 13:03:43.058907    3614 log.go:172] (0xc00015c6e0) (0xc0005d0be0) Stream removed, broadcasting: 3\nI0128 13:03:43.058919    3614 log.go:172] (0xc00015c6e0) (0xc00050c000) Stream removed, broadcasting: 5\n"
Jan 28 13:03:43.072: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:03:43.072: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 28 13:03:53.178: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 28 13:04:03.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wv95q ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 13:04:04.369: INFO: stderr: "I0128 13:04:03.700550    3636 log.go:172] (0xc000742370) (0xc0007d0640) Create stream\nI0128 13:04:03.700904    3636 log.go:172] (0xc000742370) (0xc0007d0640) Stream added, broadcasting: 1\nI0128 13:04:03.712136    3636 log.go:172] (0xc000742370) Reply frame received for 1\nI0128 13:04:03.712219    3636 log.go:172] (0xc000742370) (0xc000660d20) Create stream\nI0128 13:04:03.712291    3636 log.go:172] (0xc000742370) (0xc000660d20) Stream added, broadcasting: 3\nI0128 13:04:03.714247    3636 log.go:172] (0xc000742370) Reply frame received for 3\nI0128 13:04:03.714352    3636 log.go:172] (0xc000742370) (0xc0006fe000) Create stream\nI0128 13:04:03.714417    3636 log.go:172] (0xc000742370) (0xc0006fe000) Stream added, broadcasting: 5\nI0128 13:04:03.716890    3636 log.go:172] (0xc000742370) Reply frame received for 5\nI0128 13:04:04.101440    3636 log.go:172] (0xc000742370) Data frame received for 3\nI0128 13:04:04.101619    3636 log.go:172] (0xc000660d20) (3) Data frame handling\nI0128 13:04:04.101723    3636 log.go:172] (0xc000660d20) (3) Data frame sent\nI0128 13:04:04.344398    3636 log.go:172] (0xc000742370) Data frame received for 1\nI0128 13:04:04.344616    3636 log.go:172] (0xc000742370) (0xc0006fe000) Stream removed, broadcasting: 5\nI0128 13:04:04.344785    3636 log.go:172] (0xc0007d0640) (1) Data frame handling\nI0128 13:04:04.344831    3636 log.go:172] (0xc0007d0640) (1) Data frame sent\nI0128 13:04:04.344908    3636 log.go:172] (0xc000742370) (0xc000660d20) Stream removed, broadcasting: 3\nI0128 13:04:04.345059    3636 log.go:172] (0xc000742370) (0xc0007d0640) Stream removed, broadcasting: 1\nI0128 13:04:04.345087    3636 log.go:172] (0xc000742370) Go away received\nI0128 13:04:04.346093    3636 log.go:172] (0xc000742370) (0xc0007d0640) Stream removed, broadcasting: 1\nI0128 13:04:04.346108    3636 log.go:172] (0xc000742370) (0xc000660d20) Stream removed, broadcasting: 3\nI0128 13:04:04.346122    3636 log.go:172] (0xc000742370) (0xc0006fe000) Stream removed, broadcasting: 5\n"
Jan 28 13:04:04.370: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 13:04:04.370: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
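The rolling update being exercised here amounts to changing the pod template image on a StatefulSet with the default RollingUpdate strategy, after which pods are replaced in reverse ordinal order and tracked by controller revision (the `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` names below). A sketch of the relevant spec fragment, with the image values taken from the log:

```yaml
# Fragment (not a complete object) of the StatefulSet spec this test
# manipulates; only the container image changes between the two revisions.
spec:
  updateStrategy:
    type: RollingUpdate   # default; pods updated in reverse ordinal order
  template:
    spec:
      containers:
        - name: nginx     # assumed container name
          image: docker.io/library/nginx:1.15-alpine   # was nginx:1.14-alpine
```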

Jan 28 13:04:14.433: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:04:14.433: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:14.433: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:24.572: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:04:24.572: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:24.572: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:34.470: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:04:34.470: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:44.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:04:44.477: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 13:04:54.458: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 28 13:05:04.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wv95q ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:05:05.196: INFO: stderr: "I0128 13:05:04.677807    3659 log.go:172] (0xc000138580) (0xc0005752c0) Create stream\nI0128 13:05:04.678075    3659 log.go:172] (0xc000138580) (0xc0005752c0) Stream added, broadcasting: 1\nI0128 13:05:04.689251    3659 log.go:172] (0xc000138580) Reply frame received for 1\nI0128 13:05:04.689303    3659 log.go:172] (0xc000138580) (0xc0002c0000) Create stream\nI0128 13:05:04.689312    3659 log.go:172] (0xc000138580) (0xc0002c0000) Stream added, broadcasting: 3\nI0128 13:05:04.690390    3659 log.go:172] (0xc000138580) Reply frame received for 3\nI0128 13:05:04.690417    3659 log.go:172] (0xc000138580) (0xc0002e2000) Create stream\nI0128 13:05:04.690428    3659 log.go:172] (0xc000138580) (0xc0002e2000) Stream added, broadcasting: 5\nI0128 13:05:04.691421    3659 log.go:172] (0xc000138580) Reply frame received for 5\nI0128 13:05:05.017130    3659 log.go:172] (0xc000138580) Data frame received for 3\nI0128 13:05:05.017226    3659 log.go:172] (0xc0002c0000) (3) Data frame handling\nI0128 13:05:05.017268    3659 log.go:172] (0xc0002c0000) (3) Data frame sent\nI0128 13:05:05.181617    3659 log.go:172] (0xc000138580) (0xc0002e2000) Stream removed, broadcasting: 5\nI0128 13:05:05.181808    3659 log.go:172] (0xc000138580) Data frame received for 1\nI0128 13:05:05.181836    3659 log.go:172] (0xc000138580) (0xc0002c0000) Stream removed, broadcasting: 3\nI0128 13:05:05.181879    3659 log.go:172] (0xc0005752c0) (1) Data frame handling\nI0128 13:05:05.181898    3659 log.go:172] (0xc0005752c0) (1) Data frame sent\nI0128 13:05:05.181915    3659 log.go:172] (0xc000138580) (0xc0005752c0) Stream removed, broadcasting: 1\nI0128 13:05:05.181935    3659 log.go:172] (0xc000138580) Go away received\nI0128 13:05:05.183536    3659 log.go:172] (0xc000138580) (0xc0005752c0) Stream removed, broadcasting: 1\nI0128 13:05:05.183551    3659 log.go:172] (0xc000138580) (0xc0002c0000) Stream removed, broadcasting: 3\nI0128 13:05:05.183558    3659 log.go:172] 
(0xc000138580) (0xc0002e2000) Stream removed, broadcasting: 5\n"
Jan 28 13:05:05.196: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:05:05.196: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 13:05:15.453: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 28 13:05:25.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wv95q ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 13:05:26.173: INFO: stderr: "I0128 13:05:25.852590    3680 log.go:172] (0xc00015c630) (0xc0006b5220) Create stream\nI0128 13:05:25.852958    3680 log.go:172] (0xc00015c630) (0xc0006b5220) Stream added, broadcasting: 1\nI0128 13:05:25.862511    3680 log.go:172] (0xc00015c630) Reply frame received for 1\nI0128 13:05:25.862635    3680 log.go:172] (0xc00015c630) (0xc00072a000) Create stream\nI0128 13:05:25.862648    3680 log.go:172] (0xc00015c630) (0xc00072a000) Stream added, broadcasting: 3\nI0128 13:05:25.864160    3680 log.go:172] (0xc00015c630) Reply frame received for 3\nI0128 13:05:25.864242    3680 log.go:172] (0xc00015c630) (0xc0006b52c0) Create stream\nI0128 13:05:25.864295    3680 log.go:172] (0xc00015c630) (0xc0006b52c0) Stream added, broadcasting: 5\nI0128 13:05:25.865638    3680 log.go:172] (0xc00015c630) Reply frame received for 5\nI0128 13:05:25.991590    3680 log.go:172] (0xc00015c630) Data frame received for 3\nI0128 13:05:25.991797    3680 log.go:172] (0xc00072a000) (3) Data frame handling\nI0128 13:05:25.991879    3680 log.go:172] (0xc00072a000) (3) Data frame sent\nI0128 13:05:26.155289    3680 log.go:172] (0xc00015c630) Data frame received for 1\nI0128 13:05:26.155616    3680 log.go:172] (0xc00015c630) (0xc00072a000) Stream removed, broadcasting: 3\nI0128 13:05:26.155845    3680 log.go:172] (0xc0006b5220) (1) Data frame handling\nI0128 13:05:26.155915    3680 log.go:172] (0xc0006b5220) (1) Data frame sent\nI0128 13:05:26.156290    3680 log.go:172] (0xc00015c630) (0xc0006b52c0) Stream removed, broadcasting: 5\nI0128 13:05:26.157385    3680 log.go:172] (0xc00015c630) (0xc0006b5220) Stream removed, broadcasting: 1\nI0128 13:05:26.157459    3680 log.go:172] (0xc00015c630) Go away received\nI0128 13:05:26.158991    3680 log.go:172] (0xc00015c630) (0xc0006b5220) Stream removed, broadcasting: 1\nI0128 13:05:26.159046    3680 log.go:172] (0xc00015c630) (0xc00072a000) Stream removed, broadcasting: 3\nI0128 13:05:26.159053    3680 log.go:172] 
(0xc00015c630) (0xc0006b52c0) Stream removed, broadcasting: 5\n"
Jan 28 13:05:26.174: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 13:05:26.174: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
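
The exec invocations above all follow one pattern: the framework shells out to kubectl, running a shell command inside pod ss2-1 (here moving index.html out of and back into nginx's web root, presumably to fail and then restore the pod's HTTP readiness probe during the rollout). A sketch of how that argv is assembled, with names illustrative and flags mirroring the logged command lines:

```python
def kubectl_exec_argv(namespace, pod, command,
                      kubeconfig="/root/.kube/config"):
    """Build the argv for a kubectl exec call as seen in the log lines:
    kubectl --kubeconfig=... exec --namespace=<ns> <pod> -- /bin/sh -c <cmd>
    (sketch only; the real e2e framework wraps this in its own runner)."""
    return [
        "/usr/local/bin/kubectl",
        f"--kubeconfig={kubeconfig}",
        "exec",
        f"--namespace={namespace}",
        pod,
        "--", "/bin/sh", "-c", command,
    ]
```

Running the returned argv via subprocess would reproduce the stdout/stderr pairs captured above.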

Jan 28 13:05:37.241: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:05:37.242: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:05:37.242: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:05:47.573: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:05:47.573: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:05:47.573: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:05:57.275: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:05:57.275: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:05:57.275: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:06:07.286: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:06:07.287: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:06:17.271: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
Jan 28 13:06:17.271: INFO: Waiting for Pod e2e-tests-statefulset-wv95q/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 13:06:28.042: INFO: Waiting for StatefulSet e2e-tests-statefulset-wv95q/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 28 13:06:37.497: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wv95q
Jan 28 13:06:37.518: INFO: Scaling statefulset ss2 to 0
Jan 28 13:06:57.560: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 13:06:57.568: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:06:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wv95q" for this suite.
Jan 28 13:07:05.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:07:05.840: INFO: namespace: e2e-tests-statefulset-wv95q, resource: bindings, ignored listing per whitelist
Jan 28 13:07:05.936: INFO: namespace e2e-tests-statefulset-wv95q deletion completed in 8.313012859s

• [SLOW TEST:233.825 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
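
The repeated "Waiting for Pod ... to have revision ... update revision ..." lines in the test above come from a poll loop that compares each pod's current revision against the StatefulSet's update revision and only reports the rollout complete once they all match. A minimal sketch of that check (simplified; in a real cluster the revision is the pod's controller-revision-hash label, and the framework also verifies readiness):

```python
def statefulset_rollout_complete(pods, update_revision):
    """True once every pod reports the StatefulSet's update revision.
    `pods` is a list of dicts with a "revision" key standing in for the
    controller-revision-hash label (illustrative, not the real API shape)."""
    return all(p["revision"] == update_revision for p in pods)
```

While any pod still carries the old revision (ss2-6c5cd755cd during the update, ss2-7c9b54fd4c during the rollback), the loop logs a wait line and retries roughly every ten seconds.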
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:07:05.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-157bfa9e-41cf-11ea-a04a-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 28 13:07:06.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005" in namespace "e2e-tests-projected-b9trz" to be "success or failure"
Jan 28 13:07:06.697: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 218.403763ms
Jan 28 13:07:09.002: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524022722s
Jan 28 13:07:11.045: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566612312s
Jan 28 13:07:13.064: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58619039s
Jan 28 13:07:15.191: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713071461s
Jan 28 13:07:17.215: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.736953078s
Jan 28 13:07:19.245: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.766727529s
STEP: Saw pod success
Jan 28 13:07:19.245: INFO: Pod "pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 13:07:19.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 13:07:19.450: INFO: Waiting for pod pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005 to disappear
Jan 28 13:07:19.457: INFO: Pod pod-projected-configmaps-158e3b40-41cf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:07:19.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9trz" for this suite.
Jan 28 13:07:26.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:07:26.675: INFO: namespace: e2e-tests-projected-b9trz, resource: bindings, ignored listing per whitelist
Jan 28 13:07:26.750: INFO: namespace e2e-tests-projected-b9trz deletion completed in 7.282463225s

• [SLOW TEST:20.813 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
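
The Elapsed values in the Pending-phase lines above tick up in roughly two-second steps because the framework polls the pod's phase on a fixed interval until it reaches "Succeeded" or the 5m0s budget runs out. A stand-in for that wait helper (names illustrative; the real framework uses wait.Poll-style utilities in Go):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns a truthy
    value or `timeout` elapses; returns the last result (falsy on timeout).
    Sketch of the e2e framework's poll-until-condition pattern."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval)
```

For the test above, the condition would fetch the pod and return True once its phase is "Succeeded" (with a failure raised separately if it ends in "Failed").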
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:07:26.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 28 13:07:26.871: INFO: Creating deployment "nginx-deployment"
Jan 28 13:07:27.028: INFO: Waiting for observed generation 1
Jan 28 13:07:30.806: INFO: Waiting for all required pods to come up
Jan 28 13:07:30.846: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 28 13:08:23.661: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 28 13:08:23.671: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 28 13:08:23.691: INFO: Updating deployment nginx-deployment
Jan 28 13:08:23.691: INFO: Waiting for observed generation 2
Jan 28 13:08:29.427: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 28 13:08:30.135: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 28 13:08:31.276: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:08:34.559: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 28 13:08:34.559: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 28 13:08:36.736: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:08:37.804: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 28 13:08:37.804: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 28 13:08:39.152: INFO: Updating deployment nginx-deployment
Jan 28 13:08:39.153: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:08:39.736: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 28 13:08:40.718: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 28 13:08:41.334: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-rz74q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rz74q/deployments/nginx-deployment,UID:21c33bc7-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752504,Generation:3,CreationTimestamp:2020-01-28 13:07:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-01-28 13:08:32 +0000 UTC 2020-01-28 13:07:27 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-28 13:08:39 +0000 UTC 2020-01-28 13:08:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 28 13:08:43.266: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-rz74q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rz74q/replicasets/nginx-deployment-5c98f8fb5,UID:43a21230-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752502,Generation:3,CreationTimestamp:2020-01-28 13:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 21c33bc7-41cf-11ea-a994-fa163e34d433 0xc00279bba7 0xc00279bba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 13:08:43.266: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 28 13:08:43.267: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-rz74q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rz74q/replicasets/nginx-deployment-85ddf47c5d,UID:21e33edf-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752500,Generation:3,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 21c33bc7-41cf-11ea-a994-fa163e34d433 0xc00279bc67 0xc00279bc68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
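
The 20/13 split verified earlier (first rollout's ReplicaSet at .spec.replicas = 20, second at 13) follows from the deployment controller's proportional scaling: the allowed total is .spec.replicas plus maxSurge, 30 + 3 = 33 (the deployment.kubernetes.io/max-replicas annotation visible in the dumps), and each ReplicaSet is scaled to its rounded proportional share of that total. A simplified sketch of the arithmetic (the real controller also distributes rounding leftovers via annotations so the shares sum exactly to the allowed total):

```python
def proportional_shares(rs_sizes, allowed_total):
    """Scale each ReplicaSet to its round-to-nearest proportional share of
    allowed_total. Simplified sketch of the Kubernetes deployment
    controller's proportional scaling; leftover handling is omitted."""
    old_total = sum(rs_sizes)
    return [round(size * allowed_total / old_total) for size in rs_sizes]
```

With the sizes from this log, 8 and 5 pods against an allowed total of 33: 8 × 33 / 13 ≈ 20.3 rounds to 20, and 5 × 33 / 13 ≈ 12.7 rounds to 13, matching the verified replica counts.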
Jan 28 13:08:43.865: INFO: Pod "nginx-deployment-5c98f8fb5-7djt8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7djt8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-7djt8,UID:43c0b5a3-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752458,Generation:0,CreationTimestamp:2020-01-28 13:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026ce867 0xc0026ce868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cea80} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026ceaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.866: INFO: Pod "nginx-deployment-5c98f8fb5-8gqf2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8gqf2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-8gqf2,UID:43e127fa-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752491,Generation:0,CreationTimestamp:2020-01-28 13:08:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026ceb67 0xc0026ceb68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026ceda0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.868: INFO: Pod "nginx-deployment-5c98f8fb5-8v8sd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8v8sd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-8v8sd,UID:4ee1eca1-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752548,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cee87 0xc0026cee88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.869: INFO: Pod "nginx-deployment-5c98f8fb5-ddp86" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ddp86,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-ddp86,UID:4ee1ed73-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752551,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf0b7 0xc0026cf0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf1b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.869: INFO: Pod "nginx-deployment-5c98f8fb5-dw4z5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dw4z5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-dw4z5,UID:4f5b3002-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752547,Generation:0,CreationTimestamp:2020-01-28 13:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf2a7 0xc0026cf2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.869: INFO: Pod "nginx-deployment-5c98f8fb5-fkfz6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fkfz6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-fkfz6,UID:43e25075-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752474,Generation:0,CreationTimestamp:2020-01-28 13:08:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf400 0xc0026cf401}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.870: INFO: Pod "nginx-deployment-5c98f8fb5-gfrtn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gfrtn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-gfrtn,UID:4e265a26-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752538,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf557 0xc0026cf558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf5c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.870: INFO: Pod "nginx-deployment-5c98f8fb5-jrqf8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jrqf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-jrqf8,UID:4e22ee8a-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752540,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf657 0xc0026cf658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.870: INFO: Pod "nginx-deployment-5c98f8fb5-p4gvf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p4gvf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-p4gvf,UID:4ee286a0-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752545,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf757 0xc0026cf758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf7c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.871: INFO: Pod "nginx-deployment-5c98f8fb5-ptf97" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ptf97,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-ptf97,UID:477ee4be-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752498,Generation:0,CreationTimestamp:2020-01-28 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf857 0xc0026cf858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cf8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cf8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.871: INFO: Pod "nginx-deployment-5c98f8fb5-spm6k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-spm6k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-spm6k,UID:4ee11619-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752556,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cf9a7 0xc0026cf9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cfa10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cfa30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.872: INFO: Pod "nginx-deployment-5c98f8fb5-x54zp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x54zp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-x54zp,UID:4d2ee002-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752510,Generation:0,CreationTimestamp:2020-01-28 13:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cfaa7 0xc0026cfaa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cfb10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cfb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.872: INFO: Pod "nginx-deployment-5c98f8fb5-xgjwz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xgjwz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-5c98f8fb5-xgjwz,UID:477ae52c-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752496,Generation:0,CreationTimestamp:2020-01-28 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 43a21230-41cf-11ea-a994-fa163e34d433 0xc0026cfba7 0xc0026cfba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cfc10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cfc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.872: INFO: Pod "nginx-deployment-85ddf47c5d-2r8tj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2r8tj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-2r8tj,UID:4e2285bb-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752539,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc0026cfcf7 0xc0026cfcf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cfd60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cfd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.872: INFO: Pod "nginx-deployment-85ddf47c5d-2vr8k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2vr8k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-2vr8k,UID:22052550-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752396,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc0026cfdf7 0xc0026cfdf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026cfe60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cfe80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-28 13:07:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e2db084f4d274304d160724ae28e84e837d8d3b4feb61d90606ca1bebb8cb4c0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.873: INFO: Pod "nginx-deployment-85ddf47c5d-9nq8n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9nq8n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-9nq8n,UID:4ee37eef-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752549,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc0026cff47 0xc0026cff48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026cffb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026cffd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.873: INFO: Pod "nginx-deployment-85ddf47c5d-9zwl7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9zwl7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-9zwl7,UID:4ee4bf1c-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752554,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92047 0xc001e92048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e920b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e920d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.873: INFO: Pod "nginx-deployment-85ddf47c5d-bgrc5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bgrc5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-bgrc5,UID:22333b3b-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752398,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92147 0xc001e92148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e921b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e921d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-28 13:07:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://df465a073bff9c2636ddf23e4a5d2af1c7cb9939b1c962d2a73669b335a5f2ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.874: INFO: Pod "nginx-deployment-85ddf47c5d-d56lz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d56lz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-d56lz,UID:4cdd85ee-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752543,Generation:0,CreationTimestamp:2020-01-28 13:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92297 0xc001e92298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-28 13:08:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.874: INFO: Pod "nginx-deployment-85ddf47c5d-dqsmp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dqsmp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-dqsmp,UID:22227d64-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752423,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e923d7 0xc001e923d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-28 13:07:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://32bcb33aae189a32ee0f5ec27f5b286d36d654ceb2d3a83971f9134a986a57b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.875: INFO: Pod "nginx-deployment-85ddf47c5d-f9c5x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f9c5x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-f9c5x,UID:4e1d1bd8-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752521,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92527 0xc001e92528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e925b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.875: INFO: Pod "nginx-deployment-85ddf47c5d-jnwgk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jnwgk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-jnwgk,UID:4d2e87a0-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752512,Generation:0,CreationTimestamp:2020-01-28 13:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92627 0xc001e92628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e926b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.875: INFO: Pod "nginx-deployment-85ddf47c5d-js67p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-js67p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-js67p,UID:4e2670d0-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752541,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92727 0xc001e92728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e927b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.875: INFO: Pod "nginx-deployment-85ddf47c5d-ljz6z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljz6z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-ljz6z,UID:4d2e8069-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752515,Generation:0,CreationTimestamp:2020-01-28 13:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92827 0xc001e92828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e928b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.876: INFO: Pod "nginx-deployment-85ddf47c5d-ms9xl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ms9xl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-ms9xl,UID:222373c1-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752414,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92927 0xc001e92928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e929b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-28 13:07:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bf2c4157dd643c9e90574014dcf99073d9940096687f68c09177e5befa309fb0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.876: INFO: Pod "nginx-deployment-85ddf47c5d-nnrz6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nnrz6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-nnrz6,UID:22339103-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752419,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92a77 0xc001e92a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-28 13:07:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8c59fd557a08e4c96d0c792868bab71c6cba52ad3ff9f5307761f230b56efed0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.876: INFO: Pod "nginx-deployment-85ddf47c5d-nrtf6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nrtf6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-nrtf6,UID:4ee37a88-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752550,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92bc7 0xc001e92bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.876: INFO: Pod "nginx-deployment-85ddf47c5d-nz6qj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nz6qj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-nz6qj,UID:4ee2fb74-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752546,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92cc7 0xc001e92cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.877: INFO: Pod "nginx-deployment-85ddf47c5d-ptjp6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ptjp6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-ptjp6,UID:22242555-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752407,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92dd7 0xc001e92dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-28 13:07:31 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b6478fa2993deac84e0f04504e0ec837cbc01d0be2ea56c614057e6c65f94646}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.877: INFO: Pod "nginx-deployment-85ddf47c5d-qjcjh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qjcjh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-qjcjh,UID:2232d7ea-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752428,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e92f37 0xc001e92f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e92fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e92fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-28 13:07:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4abecf6aae6cb1228716a30498106e8cf5ddea5334a0d9deea05cb23316f9b04}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.878: INFO: Pod "nginx-deployment-85ddf47c5d-slhxb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-slhxb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-slhxb,UID:4ee31686-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752552,Generation:0,CreationTimestamp:2020-01-28 13:08:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e93097 0xc001e93098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e93100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e93130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.878: INFO: Pod "nginx-deployment-85ddf47c5d-wqfbf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wqfbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-wqfbf,UID:22080a65-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752431,Generation:0,CreationTimestamp:2020-01-28 13:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e933d7 0xc001e933d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e93440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e93470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:07:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-28 13:07:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:08:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://850ead6801da89a5c2815594edc5b2e26b317f0222698b9ced09bee49535f3f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:08:43.879: INFO: Pod "nginx-deployment-85ddf47c5d-zlc75" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zlc75,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-rz74q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rz74q/pods/nginx-deployment-85ddf47c5d-zlc75,UID:4e1ec593-41cf-11ea-a994-fa163e34d433,ResourceVersion:19752535,Generation:0,CreationTimestamp:2020-01-28 13:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 21e33edf-41cf-11ea-a994-fa163e34d433 0xc001e935b7 0xc001e935b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g564q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g564q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-g564q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001e93630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e93650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:08:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:08:43.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rz74q" for this suite.
Jan 28 13:10:01.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:10:01.941: INFO: namespace: e2e-tests-deployment-rz74q, resource: bindings, ignored listing per whitelist
Jan 28 13:10:02.046: INFO: namespace e2e-tests-deployment-rz74q deletion completed in 1m16.975336486s

• [SLOW TEST:155.296 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:10:02.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 28 13:10:03.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:08.422: INFO: stderr: ""
Jan 28 13:10:08.423: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 13:10:08.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:08.963: INFO: stderr: ""
Jan 28 13:10:08.964: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 28 13:10:13.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:14.194: INFO: stderr: ""
Jan 28 13:10:14.195: INFO: stdout: "update-demo-nautilus-m62r9 update-demo-nautilus-vplwh "
Jan 28 13:10:14.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:14.988: INFO: stderr: ""
Jan 28 13:10:14.988: INFO: stdout: ""
Jan 28 13:10:14.988: INFO: update-demo-nautilus-m62r9 is created but not running
Jan 28 13:10:19.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:20.331: INFO: stderr: ""
Jan 28 13:10:20.332: INFO: stdout: "update-demo-nautilus-m62r9 update-demo-nautilus-vplwh "
Jan 28 13:10:20.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:20.579: INFO: stderr: ""
Jan 28 13:10:20.579: INFO: stdout: ""
Jan 28 13:10:20.579: INFO: update-demo-nautilus-m62r9 is created but not running
Jan 28 13:10:25.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:25.778: INFO: stderr: ""
Jan 28 13:10:25.778: INFO: stdout: "update-demo-nautilus-m62r9 update-demo-nautilus-vplwh "
Jan 28 13:10:25.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:26.956: INFO: stderr: ""
Jan 28 13:10:26.957: INFO: stdout: ""
Jan 28 13:10:26.957: INFO: update-demo-nautilus-m62r9 is created but not running
Jan 28 13:10:31.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:32.268: INFO: stderr: ""
Jan 28 13:10:32.268: INFO: stdout: "update-demo-nautilus-m62r9 update-demo-nautilus-vplwh "
Jan 28 13:10:32.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:32.499: INFO: stderr: ""
Jan 28 13:10:32.500: INFO: stdout: ""
Jan 28 13:10:32.500: INFO: update-demo-nautilus-m62r9 is created but not running
Jan 28 13:10:37.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:37.661: INFO: stderr: ""
Jan 28 13:10:37.661: INFO: stdout: "update-demo-nautilus-m62r9 update-demo-nautilus-vplwh "
Jan 28 13:10:37.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:37.802: INFO: stderr: ""
Jan 28 13:10:37.802: INFO: stdout: "true"
Jan 28 13:10:37.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m62r9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:38.050: INFO: stderr: ""
Jan 28 13:10:38.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 13:10:38.051: INFO: validating pod update-demo-nautilus-m62r9
Jan 28 13:10:38.128: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 13:10:38.128: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 13:10:38.128: INFO: update-demo-nautilus-m62r9 is verified up and running
Jan 28 13:10:38.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vplwh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:38.341: INFO: stderr: ""
Jan 28 13:10:38.341: INFO: stdout: "true"
Jan 28 13:10:38.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vplwh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:38.555: INFO: stderr: ""
Jan 28 13:10:38.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 13:10:38.556: INFO: validating pod update-demo-nautilus-vplwh
Jan 28 13:10:38.604: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 13:10:38.605: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 13:10:38.605: INFO: update-demo-nautilus-vplwh is verified up and running
STEP: using delete to clean up resources
Jan 28 13:10:38.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:38.774: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:10:38.775: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 28 13:10:38.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-pcctb'
Jan 28 13:10:38.976: INFO: stderr: "No resources found.\n"
Jan 28 13:10:38.976: INFO: stdout: ""
Jan 28 13:10:38.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-pcctb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 13:10:39.156: INFO: stderr: ""
Jan 28 13:10:39.157: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:10:39.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pcctb" for this suite.
Jan 28 13:11:05.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:11:05.299: INFO: namespace: e2e-tests-kubectl-pcctb, resource: bindings, ignored listing per whitelist
Jan 28 13:11:05.341: INFO: namespace e2e-tests-kubectl-pcctb deletion completed in 26.173387641s

• [SLOW TEST:63.293 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:11:05.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 28 13:11:05.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 28 13:11:05.747: INFO: stderr: ""
Jan 28 13:11:05.747: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:11:05.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wldvp" for this suite.
Jan 28 13:11:11.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:11:11.960: INFO: namespace: e2e-tests-kubectl-wldvp, resource: bindings, ignored listing per whitelist
Jan 28 13:11:12.027: INFO: namespace e2e-tests-kubectl-wldvp deletion completed in 6.272157915s

• [SLOW TEST:6.686 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:11:12.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 13:11:12.226: INFO: Waiting up to 5m0s for pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005" in namespace "e2e-tests-emptydir-vj9n7" to be "success or failure"
Jan 28 13:11:12.238: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.498394ms
Jan 28 13:11:14.941: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714734242s
Jan 28 13:11:16.981: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754893165s
Jan 28 13:11:19.002: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775572185s
Jan 28 13:11:21.530: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.30358147s
Jan 28 13:11:23.537: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.310889467s
Jan 28 13:11:25.597: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 13.371096887s
Jan 28 13:11:27.629: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.40251876s
STEP: Saw pod success
Jan 28 13:11:27.629: INFO: Pod "pod-a813cc4c-41cf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 13:11:27.641: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a813cc4c-41cf-11ea-a04a-0242ac110005 container test-container: 
STEP: delete the pod
Jan 28 13:11:29.182: INFO: Waiting for pod pod-a813cc4c-41cf-11ea-a04a-0242ac110005 to disappear
Jan 28 13:11:29.237: INFO: Pod pod-a813cc4c-41cf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:11:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vj9n7" for this suite.
Jan 28 13:11:37.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:11:37.617: INFO: namespace: e2e-tests-emptydir-vj9n7, resource: bindings, ignored listing per whitelist
Jan 28 13:11:37.696: INFO: namespace e2e-tests-emptydir-vj9n7 deletion completed in 8.384346988s

• [SLOW TEST:25.669 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:11:37.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nvplr
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-nvplr
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-nvplr
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-nvplr
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-nvplr
Jan 28 13:11:52.299: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nvplr, name: ss-0, uid: ba93719c-41cf-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 28 13:11:52.594: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nvplr, name: ss-0, uid: ba93719c-41cf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 28 13:11:52.730: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nvplr, name: ss-0, uid: ba93719c-41cf-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 28 13:11:52.743: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-nvplr
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-nvplr
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-nvplr and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 28 13:12:05.229: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nvplr
Jan 28 13:12:05.236: INFO: Scaling statefulset ss to 0
Jan 28 13:12:15.287: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 13:12:15.301: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:12:15.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nvplr" for this suite.
Jan 28 13:12:23.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:12:23.856: INFO: namespace: e2e-tests-statefulset-nvplr, resource: bindings, ignored listing per whitelist
Jan 28 13:12:24.017: INFO: namespace e2e-tests-statefulset-nvplr deletion completed in 8.558756529s

• [SLOW TEST:46.321 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:12:24.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 13:12:44.581: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:44.659: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:46.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:46.685: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:48.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:48.721: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:50.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:50.676: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:52.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:52.677: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:54.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:54.676: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:56.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:56.681: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:12:58.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:12:58.674: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:00.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:00.698: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:02.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:02.750: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:04.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:04.675: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:06.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:06.890: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:08.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:08.701: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:10.662: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:10.682: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 13:13:12.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 13:13:12.798: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:13:13.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-n4tbf" for this suite.
Jan 28 13:13:37.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:13:37.669: INFO: namespace: e2e-tests-container-lifecycle-hook-n4tbf, resource: bindings, ignored listing per whitelist
Jan 28 13:13:37.678: INFO: namespace e2e-tests-container-lifecycle-hook-n4tbf deletion completed in 24.641745615s

• [SLOW TEST:73.659 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 28 13:13:37.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 28 13:13:38.068: INFO: Waiting up to 5m0s for pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005" in namespace "e2e-tests-var-expansion-5j9cr" to be "success or failure"
Jan 28 13:13:38.223: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 153.754969ms
Jan 28 13:13:40.814: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744902528s
Jan 28 13:13:42.927: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857831783s
Jan 28 13:13:44.952: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.883260521s
Jan 28 13:13:46.979: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909686691s
Jan 28 13:13:49.465: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.396238321s
Jan 28 13:13:51.477: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.408296609s
Jan 28 13:13:54.472: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.402584138s
Jan 28 13:13:56.488: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.419369814s
STEP: Saw pod success
Jan 28 13:13:56.489: INFO: Pod "var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005" satisfied condition "success or failure"
Jan 28 13:13:56.501: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 28 13:13:56.731: INFO: Waiting for pod var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005 to disappear
Jan 28 13:13:56.880: INFO: Pod var-expansion-fef948b0-41cf-11ea-a04a-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 28 13:13:56.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-5j9cr" for this suite.
Jan 28 13:14:06.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:14:07.521: INFO: namespace: e2e-tests-var-expansion-5j9cr, resource: bindings, ignored listing per whitelist
Jan 28 13:14:07.540: INFO: namespace e2e-tests-var-expansion-5j9cr deletion completed in 10.650546423s

• [SLOW TEST:29.862 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
Jan 28 13:14:07.542: INFO: Running AfterSuite actions on all nodes
Jan 28 13:14:07.542: INFO: Running AfterSuite actions on node 1
Jan 28 13:14:07.542: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8800.108 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS