I0201 10:47:14.936264 8 e2e.go:224] Starting e2e run "351548e5-44e0-11ea-a88d-0242ac110005" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1580554034 - Will randomize all specs Will run 201 of 2164 specs Feb 1 10:47:15.325: INFO: >>> kubeConfig: /root/.kube/config Feb 1 10:47:15.330: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 1 10:47:15.349: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 1 10:47:15.404: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 1 10:47:15.404: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 1 10:47:15.404: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 1 10:47:15.417: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 1 10:47:15.417: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Feb 1 10:47:15.417: INFO: e2e test version: v1.13.12 Feb 1 10:47:15.418: INFO: kube-apiserver version: v1.13.8 S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:47:15.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir Feb 1 10:47:15.630: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 1 10:47:15.743: INFO: Waiting up to 5m0s for pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-xctq4" to be "success or failure" Feb 1 10:47:15.759: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.317765ms Feb 1 10:47:17.779: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035928661s Feb 1 10:47:19.796: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052430924s Feb 1 10:47:21.815: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071861777s Feb 1 10:47:23.979: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2358477s Feb 1 10:47:26.072: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.328915249s STEP: Saw pod success Feb 1 10:47:26.073: INFO: Pod "pod-35ed99ca-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:47:26.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-35ed99ca-44e0-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 10:47:26.277: INFO: Waiting for pod pod-35ed99ca-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:47:26.285: INFO: Pod pod-35ed99ca-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:47:26.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xctq4" for this suite. Feb 1 10:47:32.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:47:32.519: INFO: namespace: e2e-tests-emptydir-xctq4, resource: bindings, ignored listing per whitelist Feb 1 10:47:32.564: INFO: namespace e2e-tests-emptydir-xctq4 deletion completed in 6.273567986s • [SLOW TEST:17.146 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:47:32.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 10:47:32.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:47:34.939: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 10:47:34.939: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 1 10:47:34.953: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 1 10:47:35.067: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 1 10:47:35.103: INFO: scanned /root for discovery docs: Feb 1 10:47:35.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:00.741: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 1 10:48:00.741: INFO: stdout: "Created e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a\nScaling up e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 1 10:48:00.741: INFO: stdout: "Created e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a\nScaling up e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 1 10:48:00.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:01.152: INFO: stderr: "" Feb 1 10:48:01.152: INFO: stdout: "e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a-qfgrp e2e-test-nginx-rc-9gr8k " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 1 10:48:06.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:06.366: INFO: stderr: "" Feb 1 10:48:06.366: INFO: stdout: "e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a-qfgrp " Feb 1 10:48:06.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a-qfgrp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:06.488: INFO: stderr: "" Feb 1 10:48:06.488: INFO: stdout: "true" Feb 1 10:48:06.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a-qfgrp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:06.605: INFO: stderr: "" Feb 1 10:48:06.606: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 1 10:48:06.606: INFO: e2e-test-nginx-rc-5e343b21f6bff3ca4da4a0e0dab3920a-qfgrp is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 1 10:48:06.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ktfrv' Feb 1 10:48:06.833: INFO: stderr: "" Feb 1 10:48:06.833: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:48:06.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ktfrv" for this suite. Feb 1 10:48:28.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:48:29.101: INFO: namespace: e2e-tests-kubectl-ktfrv, resource: bindings, ignored listing per whitelist Feb 1 10:48:29.326: INFO: namespace e2e-tests-kubectl-ktfrv deletion completed in 22.482721959s • [SLOW TEST:56.761 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:48:29.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 1 10:48:29.585: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180531,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 10:48:29.585: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180531,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 1 10:48:39.609: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180544,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 1 10:48:39.610: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180544,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 1 10:48:49.637: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180557,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Feb 1 10:48:49.637: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180557,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 1 10:48:59.660: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180570,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 1 10:48:59.660: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-a,UID:620101d0-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180570,Generation:0,CreationTimestamp:2020-02-01 10:48:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 1 10:49:09.684: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-b,UID:79e5df60-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180583,Generation:0,CreationTimestamp:2020-02-01 10:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 10:49:09.684: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-b,UID:79e5df60-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180583,Generation:0,CreationTimestamp:2020-02-01 10:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 1 10:49:19.727: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-b,UID:79e5df60-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180596,Generation:0,CreationTimestamp:2020-02-01 10:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 10:49:19.727: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2tpck,SelfLink:/api/v1/namespaces/e2e-tests-watch-2tpck/configmaps/e2e-watch-test-configmap-b,UID:79e5df60-44e0-11ea-a994-fa163e34d433,ResourceVersion:20180596,Generation:0,CreationTimestamp:2020-02-01 10:49:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:49:29.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2tpck" for this suite. 
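For reference, the Watchers spec above drives label-filtered ConfigMap watches through the Go client. A rough kubectl approximation of the same ADDED / MODIFIED / DELETED sequence is sketched below; the namespace name "watch-demo" and ConfigMap name "cm-a" are illustrative and not part of the original run, and plain `kubectl get --watch` prints objects as events arrive without labelling the event type.

kubectl create namespace watch-demo

# terminal 1: stream changes to ConfigMaps carrying label A
kubectl -n watch-demo get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# terminal 2: generate the add / modify / delete notifications the spec asserts on
kubectl -n watch-demo create configmap cm-a
kubectl -n watch-demo label configmap cm-a watch-this-configmap=multiple-watchers-A
kubectl -n watch-demo patch configmap cm-a --type merge -p '{"data":{"mutation":"1"}}'
kubectl -n watch-demo delete configmap cm-a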
Feb 1 10:49:35.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:49:35.977: INFO: namespace: e2e-tests-watch-2tpck, resource: bindings, ignored listing per whitelist Feb 1 10:49:36.067: INFO: namespace e2e-tests-watch-2tpck deletion completed in 6.329146866s • [SLOW TEST:66.741 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:49:36.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-89cc7590-44e0-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 10:49:36.398: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-x5rwr" to be "success or failure" Feb 1 10:49:36.457: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.613664ms Feb 1 10:49:38.480: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081835099s Feb 1 10:49:40.509: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111753757s Feb 1 10:49:42.710: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312777265s Feb 1 10:49:44.730: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332077507s Feb 1 10:49:46.744: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.346027779s STEP: Saw pod success Feb 1 10:49:46.744: INFO: Pod "pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:49:46.748: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 1 10:49:46.827: INFO: Waiting for pod pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:49:46.895: INFO: Pod pod-projected-configmaps-89ce04aa-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:49:46.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x5rwr" for this suite. Feb 1 10:49:52.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:49:53.004: INFO: namespace: e2e-tests-projected-x5rwr, resource: bindings, ignored listing per whitelist Feb 1 10:49:53.115: INFO: namespace e2e-tests-projected-x5rwr deletion completed in 6.19221168s • [SLOW TEST:17.047 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:49:53.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-93ee295b-44e0-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 10:49:53.392: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-87qkj" to be "success or failure" Feb 1 10:49:53.412: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.089842ms Feb 1 10:49:55.567: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174140192s Feb 1 10:49:57.579: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186749951s Feb 1 10:49:59.977: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.58433138s Feb 1 10:50:01.999: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606515773s Feb 1 10:50:04.026: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633705021s STEP: Saw pod success Feb 1 10:50:04.026: INFO: Pod "pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:50:04.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 1 10:50:04.327: INFO: Waiting for pod pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:50:04.391: INFO: Pod pod-projected-secrets-93ef9e4c-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:50:04.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-87qkj" for this suite. Feb 1 10:50:11.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:50:11.445: INFO: namespace: e2e-tests-projected-87qkj, resource: bindings, ignored listing per whitelist Feb 1 10:50:11.587: INFO: namespace e2e-tests-projected-87qkj deletion completed in 7.184997046s • [SLOW TEST:18.472 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:50:11.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 10:50:11.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-tb4jq" to be "success or failure" Feb 1 10:50:11.814: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.185453ms Feb 1 10:50:13.836: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047637458s Feb 1 10:50:15.927: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138982767s Feb 1 10:50:17.943: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155057619s Feb 1 10:50:19.992: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203472975s Feb 1 10:50:22.000: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212119745s STEP: Saw pod success Feb 1 10:50:22.000: INFO: Pod "downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:50:22.007: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 10:50:22.260: INFO: Waiting for pod downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:50:22.280: INFO: Pod downwardapi-volume-9eea80a7-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:50:22.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tb4jq" for this suite. Feb 1 10:50:28.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:50:28.542: INFO: namespace: e2e-tests-downward-api-tb4jq, resource: bindings, ignored listing per whitelist Feb 1 10:50:28.542: INFO: namespace e2e-tests-downward-api-tb4jq deletion completed in 6.254358419s • [SLOW TEST:16.954 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:50:28.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 10:50:28.869: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:50:39.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6hwjn" for this suite. 
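The Pods spec that finishes above opens a websocket to the pod's exec subresource from the Go client. `kubectl exec` negotiates its own streaming connection to the same subresource, so the following is only a rough command-line analogue; the pod name "exec-demo" and the busybox image are illustrative.

# create a long-running pod to exec into
kubectl run exec-demo --image=busybox --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-demo

# run a command inside the container via the exec subresource
kubectl exec exec-demo -- /bin/sh -c 'echo remote execution works'

kubectl delete pod exec-demo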
Feb 1 10:51:21.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:51:21.736: INFO: namespace: e2e-tests-pods-6hwjn, resource: bindings, ignored listing per whitelist Feb 1 10:51:21.771: INFO: namespace e2e-tests-pods-6hwjn deletion completed in 42.357657997s • [SLOW TEST:53.229 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:51:21.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 10:51:21.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-hwxhk" to be "success or failure" Feb 1 10:51:21.979: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.188622ms Feb 1 10:51:23.993: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054861849s Feb 1 10:51:26.010: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071328175s Feb 1 10:51:28.029: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090246965s Feb 1 10:51:30.043: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104099348s Feb 1 10:51:32.062: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.123081821s STEP: Saw pod success Feb 1 10:51:32.062: INFO: Pod "downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:51:32.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 10:51:32.286: INFO: Waiting for pod downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:51:32.320: INFO: Pod downwardapi-volume-c8bad434-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:51:32.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hwxhk" for this suite. Feb 1 10:51:38.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:51:38.557: INFO: namespace: e2e-tests-downward-api-hwxhk, resource: bindings, ignored listing per whitelist Feb 1 10:51:38.703: INFO: namespace e2e-tests-downward-api-hwxhk deletion completed in 6.371871101s • [SLOW TEST:16.932 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:51:38.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 10:51:38.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-6xt4v" to be "success or failure" Feb 1 10:51:38.999: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.525973ms Feb 1 10:51:41.018: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033011171s Feb 1 10:51:43.034: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048233684s Feb 1 10:51:45.055: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.069878969s Feb 1 10:51:47.082: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097125704s STEP: Saw pod success Feb 1 10:51:47.083: INFO: Pod "downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:51:47.106: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 10:51:47.241: INFO: Waiting for pod downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:51:47.350: INFO: Pod downwardapi-volume-d2e3419e-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:51:47.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6xt4v" for this suite. Feb 1 10:51:53.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:51:53.655: INFO: namespace: e2e-tests-downward-api-6xt4v, resource: bindings, ignored listing per whitelist Feb 1 10:51:53.727: INFO: namespace e2e-tests-downward-api-6xt4v deletion completed in 6.360932915s • [SLOW TEST:15.024 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:51:53.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-lpw52/configmap-test-dbd599fb-44e0-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 10:51:54.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-lpw52" to be "success or failure" Feb 1 10:51:54.110: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.708562ms Feb 1 10:51:56.161: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064241051s Feb 1 10:51:59.210: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.112930516s Feb 1 10:52:01.244: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.14752393s Feb 1 10:52:03.275: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.178077964s Feb 1 10:52:05.292: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.195249318s STEP: Saw pod success Feb 1 10:52:05.292: INFO: Pod "pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:52:05.300: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005 container env-test: STEP: delete the pod Feb 1 10:52:05.577: INFO: Waiting for pod pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005 to disappear Feb 1 10:52:05.609: INFO: Pod pod-configmaps-dbe41e50-44e0-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:52:05.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lpw52" for this suite. Feb 1 10:52:11.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:52:11.887: INFO: namespace: e2e-tests-configmap-lpw52, resource: bindings, ignored listing per whitelist Feb 1 10:52:11.937: INFO: namespace e2e-tests-configmap-lpw52 deletion completed in 6.318472226s • [SLOW TEST:18.210 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:52:11.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0201 10:52:14.324353 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
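The three Downward API volume specs earlier in this run (DefaultMode on files, mode on an item file, and node allocatable as the default cpu limit) all project pod metadata or resource values into files with configurable modes. A minimal combined sketch follows; the pod name, mount path, and file names are illustrative, not taken from the run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
  labels:
    zone: demo-zone
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo && cat /etc/podinfo/labels /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # applied to every projected file unless overridden per item
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: cpu_limit
        mode: 0444               # per-item mode overrides defaultMode
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # reports node allocatable when the container sets no cpu limit
EOF

kubectl logs downwardapi-volume-demo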
Feb 1 10:52:14.324: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:52:14.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-n4gch" for this suite. Feb 1 10:52:20.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:52:20.754: INFO: namespace: e2e-tests-gc-n4gch, resource: bindings, ignored listing per whitelist Feb 1 10:52:20.764: INFO: namespace e2e-tests-gc-n4gch deletion completed in 6.379893021s • [SLOW TEST:8.826 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:52:20.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 10:52:20.957: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 1 10:52:20.981: INFO: Number of nodes with available pods: 0 Feb 1 10:52:20.981: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
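The Garbage collector spec above creates a Deployment, deletes it without orphaning, and waits for the dependent ReplicaSet and Pods to be collected. A rough kubectl equivalent is sketched below; the deployment name "gc-demo" is illustrative, and kubectl's default cascading delete stands in for the explicit deletion propagation policy the test sets through the API.

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl get replicasets -l app=gc-demo        # the Deployment controller creates one ReplicaSet

# deleting the owner with cascading enabled removes the dependents instead of orphaning them
kubectl delete deployment gc-demo
kubectl get replicasets,pods -l app=gc-demo   # both lists drain to empty once the garbage collector catches up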
Feb 1 10:52:21.134: INFO: Number of nodes with available pods: 0 Feb 1 10:52:21.134: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:22.378: INFO: Number of nodes with available pods: 0 Feb 1 10:52:22.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:23.149: INFO: Number of nodes with available pods: 0 Feb 1 10:52:23.149: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:24.157: INFO: Number of nodes with available pods: 0 Feb 1 10:52:24.157: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:25.145: INFO: Number of nodes with available pods: 0 Feb 1 10:52:25.146: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:26.576: INFO: Number of nodes with available pods: 0 Feb 1 10:52:26.576: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:27.164: INFO: Number of nodes with available pods: 0 Feb 1 10:52:27.164: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:28.152: INFO: Number of nodes with available pods: 0 Feb 1 10:52:28.152: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:29.152: INFO: Number of nodes with available pods: 1 Feb 1 10:52:29.152: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 1 10:52:29.369: INFO: Number of nodes with available pods: 1 Feb 1 10:52:29.369: INFO: Number of running nodes: 0, number of available pods: 1 Feb 1 10:52:30.386: INFO: Number of nodes with available pods: 0 Feb 1 10:52:30.386: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 1 10:52:30.421: INFO: Number of nodes with available pods: 0 Feb 1 10:52:30.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:31.437: INFO: Number of nodes with available pods: 0 Feb 1 10:52:31.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:32.436: INFO: Number of nodes with available pods: 0 Feb 1 10:52:32.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:33.434: INFO: Number of nodes with available pods: 0 Feb 1 10:52:33.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:34.438: INFO: Number of nodes with available pods: 0 Feb 1 10:52:34.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:35.451: INFO: Number of nodes with available pods: 0 Feb 1 10:52:35.451: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:36.504: INFO: Number of nodes with available pods: 0 Feb 1 10:52:36.504: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:37.445: INFO: Number of nodes with available pods: 0 Feb 1 10:52:37.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:38.443: INFO: Number of nodes with available pods: 0 Feb 1 10:52:38.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:39.452: INFO: Number of nodes with available pods: 0 Feb 1 10:52:39.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:40.438: INFO: Number of nodes with available pods: 0 Feb 1 
10:52:40.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:41.433: INFO: Number of nodes with available pods: 0 Feb 1 10:52:41.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:42.461: INFO: Number of nodes with available pods: 0 Feb 1 10:52:42.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:43.465: INFO: Number of nodes with available pods: 0 Feb 1 10:52:43.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:44.445: INFO: Number of nodes with available pods: 0 Feb 1 10:52:44.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:45.434: INFO: Number of nodes with available pods: 0 Feb 1 10:52:45.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:46.451: INFO: Number of nodes with available pods: 0 Feb 1 10:52:46.451: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:47.485: INFO: Number of nodes with available pods: 0 Feb 1 10:52:47.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:48.478: INFO: Number of nodes with available pods: 0 Feb 1 10:52:48.478: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:49.435: INFO: Number of nodes with available pods: 0 Feb 1 10:52:49.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:50.492: INFO: Number of nodes with available pods: 0 Feb 1 10:52:50.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 10:52:51.447: INFO: Number of nodes with available pods: 1 Feb 1 10:52:51.447: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5dsg6, will wait for the garbage collector to delete the pods Feb 1 10:52:51.586: INFO: Deleting DaemonSet.extensions daemon-set took: 51.20057ms Feb 1 10:52:51.887: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.481472ms Feb 1 10:53:02.660: INFO: Number of nodes with available pods: 0 Feb 1 10:53:02.660: INFO: Number of running nodes: 0, number of available pods: 0 Feb 1 10:53:02.685: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5dsg6/daemonsets","resourceVersion":"20181128"},"items":null} Feb 1 10:53:02.754: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5dsg6/pods","resourceVersion":"20181129"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:53:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-5dsg6" for this suite. 
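The complex-daemon spec above gates DaemonSet scheduling on a node label and then flips that label. A minimal sketch of the same wiring follows; the DaemonSet name, the color label, and the node-name placeholder are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue              # daemon pods land only on nodes carrying this label
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF

# no daemon pods run until a node carries the selector label
kubectl label node <node-name> color=blue
kubectl get pods -l app=daemon-demo -o wide

# relabeling the node unschedules the daemon pod again
kubectl label node <node-name> color=green --overwrite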
Feb 1 10:53:11.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:53:11.444: INFO: namespace: e2e-tests-daemonsets-5dsg6, resource: bindings, ignored listing per whitelist Feb 1 10:53:11.610: INFO: namespace e2e-tests-daemonsets-5dsg6 deletion completed in 8.373591911s • [SLOW TEST:50.846 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:53:11.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:53:19.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-lpjkn" for this suite. 
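[Editor's note] The Kubelet hostAliases test that just ran verifies that entries declared in pod.spec.hostAliases show up in the container's /etc/hosts. A minimal sketch with illustrative IP and hostnames (none of these values appear in the log):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo    # the printed hosts file should contain a 123.45.67.89 line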
Feb 1 10:54:01.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:54:02.018: INFO: namespace: e2e-tests-kubelet-test-lpjkn, resource: bindings, ignored listing per whitelist Feb 1 10:54:02.107: INFO: namespace e2e-tests-kubelet-test-lpjkn deletion completed in 42.152000369s • [SLOW TEST:50.496 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:54:02.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-285f6bf6-44e1-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 10:54:02.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-mr9s7" to be "success or failure" Feb 1 10:54:02.537: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.856093ms Feb 1 10:54:04.662: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161173451s Feb 1 10:54:06.673: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171940349s Feb 1 10:54:08.699: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198067711s Feb 1 10:54:10.722: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.220404721s STEP: Saw pod success Feb 1 10:54:10.722: INFO: Pod "pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:54:10.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 1 10:54:12.406: INFO: Waiting for pod pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005 to disappear Feb 1 10:54:12.417: INFO: Pod pod-configmaps-286c0a88-44e1-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:54:12.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mr9s7" for this suite. 
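[Editor's note] The ConfigMap volume test above mounts a ConfigMap with an explicit defaultMode and asserts on the mounted file's permissions and content from inside the pod. A rough equivalent, with illustrative object names and a 0400 mode (the exact mode used by the suite is not printed in the log):

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/config && cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      defaultMode: 0400        # every projected file gets mode -r--------
EOF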
Feb 1 10:54:18.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:54:18.767: INFO: namespace: e2e-tests-configmap-mr9s7, resource: bindings, ignored listing per whitelist Feb 1 10:54:18.927: INFO: namespace e2e-tests-configmap-mr9s7 deletion completed in 6.500310827s • [SLOW TEST:16.820 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:54:18.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 1 10:54:19.420: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:54:40.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-g425c" for this suite. 
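[Editor's note] The InitContainer test above creates a RestartAlways pod whose spec.initContainers must all run to completion, in order, before the regular containers start. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo first init container"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo second init container"]
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod init-demo -w    # status moves Init:0/2 -> Init:1/2 -> PodInitializing -> Running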
Feb 1 10:55:20.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:55:20.411: INFO: namespace: e2e-tests-init-container-g425c, resource: bindings, ignored listing per whitelist Feb 1 10:55:20.793: INFO: namespace e2e-tests-init-container-g425c deletion completed in 40.585482472s • [SLOW TEST:61.866 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:55:20.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xj5cr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xj5cr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 1 10:55:37.215: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.221: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.225: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.234: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.241: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.245: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.250: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.256: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.263: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.268: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-573e0192-44e1-11ea-a88d-0242ac110005) Feb 1 10:55:37.268: INFO: Lookups using e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xj5cr.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 1 10:55:42.379: INFO: DNS probes using e2e-tests-dns-xj5cr/dns-test-573e0192-44e1-11ea-a88d-0242ac110005 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:55:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-xj5cr" for this suite. 
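[Editor's note] The DNS test drives the dig/getent probe loops quoted above from wheezy- and jessie-based utility pods and records an OK marker file per resolvable name; the transient "Unable to read ..." lines only mean some markers had not been written yet on the first poll, and the probes succeed shortly after. The same lookups can be reproduced by hand from any pod that has dig installed (the pod name dns-utils below is illustrative):

kubectl exec dns-utils -- dig +notcp +noall +answer +search kubernetes.default A
kubectl exec dns-utils -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
kubectl exec dns-utils -- getent hosts kubernetes.default.svc.cluster.local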
Feb 1 10:55:50.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:55:50.697: INFO: namespace: e2e-tests-dns-xj5cr, resource: bindings, ignored listing per whitelist Feb 1 10:55:51.528: INFO: namespace e2e-tests-dns-xj5cr deletion completed in 9.067496522s • [SLOW TEST:30.735 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:55:51.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 10:55:51.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-8lrdr' Feb 1 10:55:52.008: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 10:55:52.008: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 1 10:55:56.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-8lrdr' Feb 1 10:55:56.228: INFO: stderr: "" Feb 1 10:55:56.228: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:55:56.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8lrdr" for this suite. 
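[Editor's note] The kubectl test above goes through the deprecated kubectl run --generator=deployment/v1beta1 path, which the stderr warning flags. A present-day equivalent of the same create/verify/delete cycle, sketched with kubectl create deployment (which labels the pods app=<deployment name> by default):

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l app=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment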
Feb 1 10:56:20.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:56:20.503: INFO: namespace: e2e-tests-kubectl-8lrdr, resource: bindings, ignored listing per whitelist Feb 1 10:56:20.710: INFO: namespace e2e-tests-kubectl-8lrdr deletion completed in 24.475286409s • [SLOW TEST:29.182 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:56:20.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 10:56:20.860: INFO: Creating deployment "test-recreate-deployment" Feb 1 10:56:20.870: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 1 10:56:20.878: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 1 10:56:22.920: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 1 10:56:22.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151381, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 10:56:24.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151381, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 10:56:26.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151381, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716151380, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 1 10:56:28.947: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 1 10:56:28.965: INFO: Updating deployment test-recreate-deployment Feb 1 10:56:28.965: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 1 10:56:29.706: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-mglpd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglpd/deployments/test-recreate-deployment,UID:7ae8c78f-44e1-11ea-a994-fa163e34d433,ResourceVersion:20181590,Generation:2,CreationTimestamp:2020-02-01 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-01 10:56:29 +0000 UTC 2020-02-01 10:56:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-01 10:56:29 +0000 UTC 2020-02-01 10:56:20 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 1 10:56:29.729: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-mglpd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglpd/replicasets/test-recreate-deployment-589c4bfd,UID:7fe1d733-44e1-11ea-a994-fa163e34d433,ResourceVersion:20181587,Generation:1,CreationTimestamp:2020-02-01 10:56:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7ae8c78f-44e1-11ea-a994-fa163e34d433 0xc00065064f 0xc000650660}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 1 10:56:29.730: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 1 10:56:29.730: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-mglpd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglpd/replicasets/test-recreate-deployment-5bf7f65dc,UID:7af29c52-44e1-11ea-a994-fa163e34d433,ResourceVersion:20181578,Generation:2,CreationTimestamp:2020-02-01 10:56:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7ae8c78f-44e1-11ea-a994-fa163e34d433 0xc000650730 0xc000650731}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 1 10:56:29.847: INFO: Pod "test-recreate-deployment-589c4bfd-7vx7z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-7vx7z,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-mglpd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mglpd/pods/test-recreate-deployment-589c4bfd-7vx7z,UID:7fe34200-44e1-11ea-a994-fa163e34d433,ResourceVersion:20181591,Generation:0,CreationTimestamp:2020-02-01 10:56:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 7fe1d733-44e1-11ea-a994-fa163e34d433 0xc00118549f 0xc0011854b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-d7gsw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d7gsw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d7gsw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001185ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001185ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 10:56:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 10:56:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 10:56:29 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 10:56:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 10:56:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:56:29.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-mglpd" for this suite. Feb 1 10:56:41.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:56:41.933: INFO: namespace: e2e-tests-deployment-mglpd, resource: bindings, ignored listing per whitelist Feb 1 10:56:42.026: INFO: namespace e2e-tests-deployment-mglpd deletion completed in 12.161493528s • [SLOW TEST:21.315 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:56:42.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-87a190b3-44e1-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 10:56:42.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-v5x2n" to be "success or failure" Feb 1 10:56:42.260: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.379912ms Feb 1 10:56:44.290: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047784713s Feb 1 10:56:46.309: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067071898s Feb 1 10:56:48.507: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264590531s Feb 1 10:56:50.537: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294596208s Feb 1 10:56:52.583: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.340885327s Feb 1 10:56:54.619: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.377376892s STEP: Saw pod success Feb 1 10:56:54.619: INFO: Pod "pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:56:54.626: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 1 10:56:54.836: INFO: Waiting for pod pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005 to disappear Feb 1 10:56:54.920: INFO: Pod pod-projected-secrets-87a40104-44e1-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:56:54.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v5x2n" for this suite. Feb 1 10:57:00.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:57:01.070: INFO: namespace: e2e-tests-projected-v5x2n, resource: bindings, ignored listing per whitelist Feb 1 10:57:01.275: INFO: namespace e2e-tests-projected-v5x2n deletion completed in 6.324004206s • [SLOW TEST:19.249 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:57:01.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 1 10:57:01.541: INFO: Waiting up to 5m0s for pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-bt9v6" to be "success or failure" Feb 1 10:57:01.622: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 80.977837ms Feb 1 10:57:03.636: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095016249s Feb 1 10:57:05.648: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107381647s Feb 1 10:57:07.675: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134758588s Feb 1 10:57:09.700: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.159339146s Feb 1 10:57:12.223: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.682339396s STEP: Saw pod success Feb 1 10:57:12.223: INFO: Pod "downward-api-93261334-44e1-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 10:57:12.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-93261334-44e1-11ea-a88d-0242ac110005 container dapi-container: STEP: delete the pod Feb 1 10:57:12.643: INFO: Waiting for pod downward-api-93261334-44e1-11ea-a88d-0242ac110005 to disappear Feb 1 10:57:12.659: INFO: Pod downward-api-93261334-44e1-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:57:12.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bt9v6" for this suite. Feb 1 10:57:18.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:57:18.892: INFO: namespace: e2e-tests-downward-api-bt9v6, resource: bindings, ignored listing per whitelist Feb 1 10:57:19.036: INFO: namespace e2e-tests-downward-api-bt9v6 deletion completed in 6.358141152s • [SLOW TEST:17.761 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:57:19.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fskq8 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 1 10:57:19.420: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 1 10:57:57.774: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-fskq8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 10:57:57.774: INFO: >>> kubeConfig: /root/.kube/config I0201 10:57:57.882403 8 log.go:172] (0xc001000580) (0xc001ada1e0) Create stream I0201 10:57:57.882533 8 log.go:172] (0xc001000580) (0xc001ada1e0) Stream added, broadcasting: 1 I0201 10:57:57.890249 8 log.go:172] (0xc001000580) Reply frame received for 1 I0201 10:57:57.890296 8 log.go:172] (0xc001000580) (0xc001ada280) Create stream I0201 10:57:57.890306 8 log.go:172] (0xc001000580) (0xc001ada280) Stream added, broadcasting: 3 
I0201 10:57:57.891914 8 log.go:172] (0xc001000580) Reply frame received for 3 I0201 10:57:57.891965 8 log.go:172] (0xc001000580) (0xc001f6e000) Create stream I0201 10:57:57.891977 8 log.go:172] (0xc001000580) (0xc001f6e000) Stream added, broadcasting: 5 I0201 10:57:57.894749 8 log.go:172] (0xc001000580) Reply frame received for 5 I0201 10:57:59.064949 8 log.go:172] (0xc001000580) Data frame received for 3 I0201 10:57:59.065007 8 log.go:172] (0xc001ada280) (3) Data frame handling I0201 10:57:59.065037 8 log.go:172] (0xc001ada280) (3) Data frame sent I0201 10:57:59.269762 8 log.go:172] (0xc001000580) (0xc001f6e000) Stream removed, broadcasting: 5 I0201 10:57:59.269985 8 log.go:172] (0xc001000580) Data frame received for 1 I0201 10:57:59.270027 8 log.go:172] (0xc001000580) (0xc001ada280) Stream removed, broadcasting: 3 I0201 10:57:59.270191 8 log.go:172] (0xc001ada1e0) (1) Data frame handling I0201 10:57:59.270239 8 log.go:172] (0xc001ada1e0) (1) Data frame sent I0201 10:57:59.270279 8 log.go:172] (0xc001000580) (0xc001ada1e0) Stream removed, broadcasting: 1 I0201 10:57:59.270316 8 log.go:172] (0xc001000580) Go away received I0201 10:57:59.270581 8 log.go:172] (0xc001000580) (0xc001ada1e0) Stream removed, broadcasting: 1 I0201 10:57:59.270597 8 log.go:172] (0xc001000580) (0xc001ada280) Stream removed, broadcasting: 3 I0201 10:57:59.270611 8 log.go:172] (0xc001000580) (0xc001f6e000) Stream removed, broadcasting: 5 Feb 1 10:57:59.270: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:57:59.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-fskq8" for this suite. 
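[Editor's note] The networking test above execs into a host-network helper pod and fires a UDP probe at each netserver pod, expecting the pod to echo its hostname back; the streamed frames in the I0201 lines are that exec session. The probe itself is the one-liner already visible in the ExecWithOptions entry, and can be run manually from anywhere with reach into the pod network (10.32.0.4:8081 is the netserver endpoint from this particular run):

echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'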
Feb 1 10:58:23.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:58:23.481: INFO: namespace: e2e-tests-pod-network-test-fskq8, resource: bindings, ignored listing per whitelist Feb 1 10:58:23.693: INFO: namespace e2e-tests-pod-network-test-fskq8 deletion completed in 24.304274733s • [SLOW TEST:64.656 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:58:23.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 1 10:58:23.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pll62' Feb 1 10:58:26.264: INFO: stderr: "" Feb 1 10:58:26.264: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 1 10:58:27.321: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:27.321: INFO: Found 0 / 1 Feb 1 10:58:28.334: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:28.334: INFO: Found 0 / 1 Feb 1 10:58:29.291: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:29.291: INFO: Found 0 / 1 Feb 1 10:58:30.281: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:30.281: INFO: Found 0 / 1 Feb 1 10:58:31.285: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:31.285: INFO: Found 0 / 1 Feb 1 10:58:32.373: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:32.373: INFO: Found 0 / 1 Feb 1 10:58:33.290: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:33.290: INFO: Found 0 / 1 Feb 1 10:58:34.292: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:34.292: INFO: Found 1 / 1 Feb 1 10:58:34.292: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 1 10:58:34.302: INFO: Selector matched 1 pods for map[app:redis] Feb 1 10:58:34.302: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Feb 1 10:58:34.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62' Feb 1 10:58:34.513: INFO: stderr: "" Feb 1 10:58:34.513: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 10:58:32.865 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 10:58:32.865 # Server started, Redis version 3.2.12\n1:M 01 Feb 10:58:32.865 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Feb 10:58:32.865 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 1 10:58:34.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --tail=1' Feb 1 10:58:34.676: INFO: stderr: "" Feb 1 10:58:34.676: INFO: stdout: "1:M 01 Feb 10:58:32.865 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 1 10:58:34.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --limit-bytes=1' Feb 1 10:58:34.879: INFO: stderr: "" Feb 1 10:58:34.879: INFO: stdout: " " STEP: exposing timestamps Feb 1 10:58:34.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --tail=1 --timestamps' Feb 1 10:58:35.015: INFO: stderr: "" Feb 1 10:58:35.015: INFO: stdout: "2020-02-01T10:58:32.867914208Z 1:M 01 Feb 10:58:32.865 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 1 10:58:37.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --since=1s' Feb 1 10:58:37.864: INFO: stderr: "" Feb 1 10:58:37.864: INFO: stdout: "" Feb 1 10:58:37.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --since=24h' Feb 1 10:58:38.031: INFO: stderr: "" Feb 1 10:58:38.031: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 10:58:32.865 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 10:58:32.865 # Server started, Redis version 3.2.12\n1:M 01 Feb 10:58:32.865 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Feb 10:58:32.865 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 1 10:58:38.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pll62' Feb 1 10:58:38.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 1 10:58:38.161: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 1 10:58:38.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pll62' Feb 1 10:58:38.306: INFO: stderr: "No resources found.\n" Feb 1 10:58:38.306: INFO: stdout: "" Feb 1 10:58:38.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pll62 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 1 10:58:38.593: INFO: stderr: "" Feb 1 10:58:38.594: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 10:58:38.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pll62" for this suite. 
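[Editor's note] The Kubectl logs test exercises the filtering flags shown in the commands above; it still uses the old kubectl log alias, for which kubectl logs is the current spelling. The same filters, runnable directly against the pod from this run:

kubectl logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --tail=1
kubectl logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --limit-bytes=1
kubectl logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --tail=1 --timestamps
kubectl logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --since=1s
kubectl logs redis-master-d2jlx redis-master --namespace=e2e-tests-kubectl-pll62 --since=24h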
Feb 1 10:59:02.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 10:59:02.751: INFO: namespace: e2e-tests-kubectl-pll62, resource: bindings, ignored listing per whitelist Feb 1 10:59:02.850: INFO: namespace e2e-tests-kubectl-pll62 deletion completed in 24.238689724s • [SLOW TEST:39.157 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 10:59:02.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ls29b Feb 1 10:59:13.179: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ls29b STEP: checking the pod's current state and verifying that restartCount is present Feb 1 10:59:13.190: INFO: Initial restart count of pod liveness-exec is 0 Feb 1 11:00:10.264: INFO: Restart count of pod e2e-tests-container-probe-ls29b/liveness-exec is now 1 (57.073686837s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:00:10.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ls29b" for this suite. 
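The probe test above creates a pod whose exec liveness probe runs "cat /tmp/health" and then waits for the kubelet to restart the container once that file disappears (restart count goes from 0 to 1 after about 57s here). A minimal sketch of such a pod, assuming a busybox image and made-up timings rather than the exact manifest the suite generates:

kubectl apply -n e2e-tests-container-probe-ls29b -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# watch the restart count climb once the probe starts failing
kubectl get pod liveness-exec -n e2e-tests-container-probe-ls29b \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'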
Feb 1 11:00:16.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:00:16.755: INFO: namespace: e2e-tests-container-probe-ls29b, resource: bindings, ignored listing per whitelist Feb 1 11:00:16.846: INFO: namespace e2e-tests-container-probe-ls29b deletion completed in 6.458178544s • [SLOW TEST:73.996 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:00:16.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 1 11:00:17.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-g7qth run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 1 11:00:27.817: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0201 11:00:26.514797 454 log.go:172] (0xc000902370) (0xc0005f0780) Create stream\nI0201 11:00:26.515403 454 log.go:172] (0xc000902370) (0xc0005f0780) Stream added, broadcasting: 1\nI0201 11:00:26.529145 454 log.go:172] (0xc000902370) Reply frame received for 1\nI0201 11:00:26.529381 454 log.go:172] (0xc000902370) (0xc0005f0820) Create stream\nI0201 11:00:26.529423 454 log.go:172] (0xc000902370) (0xc0005f0820) Stream added, broadcasting: 3\nI0201 11:00:26.532442 454 log.go:172] (0xc000902370) Reply frame received for 3\nI0201 11:00:26.532743 454 log.go:172] (0xc000902370) (0xc0005efa40) Create stream\nI0201 11:00:26.532759 454 log.go:172] (0xc000902370) (0xc0005efa40) Stream added, broadcasting: 5\nI0201 11:00:26.535558 454 log.go:172] (0xc000902370) Reply frame received for 5\nI0201 11:00:26.535628 454 log.go:172] (0xc000902370) (0xc0005f08c0) Create stream\nI0201 11:00:26.535656 454 log.go:172] (0xc000902370) (0xc0005f08c0) Stream added, broadcasting: 7\nI0201 11:00:26.537560 454 log.go:172] (0xc000902370) Reply frame received for 7\nI0201 11:00:26.538465 454 log.go:172] (0xc0005f0820) (3) Writing data frame\nI0201 11:00:26.539571 454 log.go:172] (0xc0005f0820) (3) Writing data frame\nI0201 11:00:26.562184 454 log.go:172] (0xc000902370) Data frame received for 5\nI0201 11:00:26.562225 454 log.go:172] (0xc0005efa40) (5) Data frame handling\nI0201 11:00:26.562283 454 log.go:172] (0xc0005efa40) (5) Data frame sent\nI0201 11:00:26.569453 454 log.go:172] (0xc000902370) Data frame received for 5\nI0201 11:00:26.569475 454 log.go:172] (0xc0005efa40) (5) Data frame handling\nI0201 11:00:26.569489 454 log.go:172] (0xc0005efa40) (5) Data frame sent\nI0201 11:00:27.743627 454 log.go:172] (0xc000902370) (0xc0005f08c0) Stream removed, broadcasting: 7\nI0201 11:00:27.743877 454 log.go:172] (0xc000902370) Data frame received for 1\nI0201 11:00:27.743901 454 log.go:172] (0xc0005f0780) (1) Data frame handling\nI0201 11:00:27.743941 454 log.go:172] (0xc0005f0780) (1) Data frame sent\nI0201 11:00:27.744005 454 log.go:172] (0xc000902370) (0xc0005f0820) Stream removed, broadcasting: 3\nI0201 11:00:27.744170 454 log.go:172] (0xc000902370) (0xc0005efa40) Stream removed, broadcasting: 5\nI0201 11:00:27.744236 454 log.go:172] (0xc000902370) (0xc0005f0780) Stream removed, broadcasting: 1\nI0201 11:00:27.744273 454 log.go:172] (0xc000902370) Go away received\nI0201 11:00:27.744627 454 log.go:172] (0xc000902370) (0xc0005f0780) Stream removed, broadcasting: 1\nI0201 11:00:27.744644 454 log.go:172] (0xc000902370) (0xc0005f0820) Stream removed, broadcasting: 3\nI0201 11:00:27.744652 454 log.go:172] (0xc000902370) (0xc0005efa40) Stream removed, broadcasting: 5\nI0201 11:00:27.744659 454 log.go:172] (0xc000902370) (0xc0005f08c0) Stream removed, broadcasting: 7\n" Feb 1 11:00:27.817: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:00:30.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g7qth" for this suite. 
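The kubectl invocation quoted verbatim above starts a one-off busybox Job, attaches stdin, and relies on --rm to delete the Job when the attached session ends; the "abcd1234" in stdout is simply what was written to the pod's cat over the attach stream. A rough hand-run equivalent, piping the input instead of driving the stream programmatically (on the 1.13-era client this creates a Job; newer kubectl drops --generator and creates a plain pod instead):

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --namespace=e2e-tests-kubectl-g7qth \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo stdin closed'
# after the command returns, the job should already be gone
kubectl get jobs -n e2e-tests-kubectl-g7qth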
Feb 1 11:00:36.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:00:36.396: INFO: namespace: e2e-tests-kubectl-g7qth, resource: bindings, ignored listing per whitelist Feb 1 11:00:36.403: INFO: namespace e2e-tests-kubectl-g7qth deletion completed in 6.359766103s • [SLOW TEST:19.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:00:36.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tjjxx STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 1 11:00:36.626: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 1 11:01:08.852: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-tjjxx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:01:08.852: INFO: >>> kubeConfig: /root/.kube/config I0201 11:01:08.912629 8 log.go:172] (0xc0008b02c0) (0xc0015b2500) Create stream I0201 11:01:08.912771 8 log.go:172] (0xc0008b02c0) (0xc0015b2500) Stream added, broadcasting: 1 I0201 11:01:08.921541 8 log.go:172] (0xc0008b02c0) Reply frame received for 1 I0201 11:01:08.921565 8 log.go:172] (0xc0008b02c0) (0xc000b8b400) Create stream I0201 11:01:08.921578 8 log.go:172] (0xc0008b02c0) (0xc000b8b400) Stream added, broadcasting: 3 I0201 11:01:08.923500 8 log.go:172] (0xc0008b02c0) Reply frame received for 3 I0201 11:01:08.923544 8 log.go:172] (0xc0008b02c0) (0xc0015b25a0) Create stream I0201 11:01:08.923559 8 log.go:172] (0xc0008b02c0) (0xc0015b25a0) Stream added, broadcasting: 5 I0201 11:01:08.926523 8 log.go:172] (0xc0008b02c0) Reply frame received for 5 I0201 11:01:09.220503 8 log.go:172] (0xc0008b02c0) Data frame received for 3 I0201 11:01:09.220539 8 log.go:172] (0xc000b8b400) (3) Data frame handling I0201 11:01:09.220558 8 log.go:172] (0xc000b8b400) (3) Data frame sent I0201 11:01:09.375906 8 log.go:172] (0xc0008b02c0) (0xc000b8b400) Stream removed, broadcasting: 3 I0201 11:01:09.376087 8 log.go:172] (0xc0008b02c0) Data frame received for 1 I0201 11:01:09.376134 8 log.go:172] (0xc0015b2500) (1) Data frame handling I0201 
11:01:09.376158 8 log.go:172] (0xc0015b2500) (1) Data frame sent I0201 11:01:09.376175 8 log.go:172] (0xc0008b02c0) (0xc0015b25a0) Stream removed, broadcasting: 5 I0201 11:01:09.376241 8 log.go:172] (0xc0008b02c0) (0xc0015b2500) Stream removed, broadcasting: 1 I0201 11:01:09.376287 8 log.go:172] (0xc0008b02c0) Go away received I0201 11:01:09.376529 8 log.go:172] (0xc0008b02c0) (0xc0015b2500) Stream removed, broadcasting: 1 I0201 11:01:09.376570 8 log.go:172] (0xc0008b02c0) (0xc000b8b400) Stream removed, broadcasting: 3 I0201 11:01:09.376597 8 log.go:172] (0xc0008b02c0) (0xc0015b25a0) Stream removed, broadcasting: 5 Feb 1 11:01:09.376: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:01:09.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-tjjxx" for this suite. Feb 1 11:01:35.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:01:35.546: INFO: namespace: e2e-tests-pod-network-test-tjjxx, resource: bindings, ignored listing per whitelist Feb 1 11:01:35.629: INFO: namespace e2e-tests-pod-network-test-tjjxx deletion completed in 26.215302405s • [SLOW TEST:59.226 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:01:35.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:01:46.179: INFO: Waiting up to 5m0s for pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005" in namespace "e2e-tests-pods-brtxp" to be "success or failure" Feb 1 11:01:46.210: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.482821ms Feb 1 11:01:48.321: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142518921s Feb 1 11:01:50.332: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153079481s Feb 1 11:01:52.577: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.398333166s Feb 1 11:01:54.608: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429608005s Feb 1 11:01:56.642: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.463464871s Feb 1 11:01:58.678: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.499639322s STEP: Saw pod success Feb 1 11:01:58.678: INFO: Pod "client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:01:58.684: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005 container env3cont: STEP: delete the pod Feb 1 11:01:58.816: INFO: Waiting for pod client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005 to disappear Feb 1 11:01:58.826: INFO: Pod client-envvars-3cccd892-44e2-11ea-a88d-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:01:58.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-brtxp" for this suite. Feb 1 11:02:42.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:02:42.958: INFO: namespace: e2e-tests-pods-brtxp, resource: bindings, ignored listing per whitelist Feb 1 11:02:42.968: INFO: namespace e2e-tests-pods-brtxp deletion completed in 44.133646177s • [SLOW TEST:67.338 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:02:42.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5ee56568-44e2-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5ee56568-44e2-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:02:53.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j6xbp" for this suite. 
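The projected-configMap test above mounts a ConfigMap into a pod through a projected volume, edits the ConfigMap, and waits for the kubelet to sync the new value into the mounted file. A minimal sketch of the same round trip; the names cm-demo and projected-cm-demo, the key data-1, and the mount path are assumptions, not the values the suite generated:

kubectl create configmap cm-demo -n e2e-tests-projected-j6xbp --from-literal=data-1=value-1
kubectl apply -n e2e-tests-projected-j6xbp -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF
# update the ConfigMap, then re-read the mounted file until the kubelet syncs it (can take up to a minute)
kubectl patch configmap cm-demo -n e2e-tests-projected-j6xbp --type merge -p '{"data":{"data-1":"value-2"}}'
kubectl exec -n e2e-tests-projected-j6xbp projected-cm-demo -- cat /etc/projected/data-1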
Feb 1 11:03:17.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:03:17.783: INFO: namespace: e2e-tests-projected-j6xbp, resource: bindings, ignored listing per whitelist Feb 1 11:03:17.870: INFO: namespace e2e-tests-projected-j6xbp deletion completed in 24.217909653s • [SLOW TEST:34.902 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:03:17.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-73a76f8e-44e2-11ea-a88d-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-73a76fe2-44e2-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-73a76f8e-44e2-11ea-a88d-0242ac110005 STEP: Updating configmap cm-test-opt-upd-73a76fe2-44e2-11ea-a88d-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-73a77021-44e2-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:03:34.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r8bn5" for this suite. 
Feb 1 11:03:58.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:03:58.875: INFO: namespace: e2e-tests-projected-r8bn5, resource: bindings, ignored listing per whitelist Feb 1 11:03:59.071: INFO: namespace e2e-tests-projected-r8bn5 deletion completed in 24.343636809s • [SLOW TEST:41.199 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:03:59.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 1 11:03:59.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:03:59.784: INFO: stderr: "" Feb 1 11:03:59.784: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 1 11:03:59.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:00.086: INFO: stderr: "" Feb 1 11:04:00.086: INFO: stdout: "update-demo-nautilus-bzdff update-demo-nautilus-fqrdt " Feb 1 11:04:00.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzdff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:00.260: INFO: stderr: "" Feb 1 11:04:00.260: INFO: stdout: "" Feb 1 11:04:00.260: INFO: update-demo-nautilus-bzdff is created but not running Feb 1 11:04:05.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:05.851: INFO: stderr: "" Feb 1 11:04:05.851: INFO: stdout: "update-demo-nautilus-bzdff update-demo-nautilus-fqrdt " Feb 1 11:04:05.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzdff -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:06.335: INFO: stderr: "" Feb 1 11:04:06.335: INFO: stdout: "" Feb 1 11:04:06.335: INFO: update-demo-nautilus-bzdff is created but not running Feb 1 11:04:11.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:11.544: INFO: stderr: "" Feb 1 11:04:11.544: INFO: stdout: "update-demo-nautilus-bzdff update-demo-nautilus-fqrdt " Feb 1 11:04:11.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzdff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:11.663: INFO: stderr: "" Feb 1 11:04:11.663: INFO: stdout: "true" Feb 1 11:04:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzdff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:11.830: INFO: stderr: "" Feb 1 11:04:11.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 11:04:11.830: INFO: validating pod update-demo-nautilus-bzdff Feb 1 11:04:11.849: INFO: got data: { "image": "nautilus.jpg" } Feb 1 11:04:11.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 1 11:04:11.849: INFO: update-demo-nautilus-bzdff is verified up and running Feb 1 11:04:11.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqrdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:12.058: INFO: stderr: "" Feb 1 11:04:12.058: INFO: stdout: "true" Feb 1 11:04:12.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqrdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:12.256: INFO: stderr: "" Feb 1 11:04:12.256: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 11:04:12.256: INFO: validating pod update-demo-nautilus-fqrdt Feb 1 11:04:12.271: INFO: got data: { "image": "nautilus.jpg" } Feb 1 11:04:12.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 1 11:04:12.271: INFO: update-demo-nautilus-fqrdt is verified up and running STEP: using delete to clean up resources Feb 1 11:04:12.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:12.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 1 11:04:12.441: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 1 11:04:12.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-h7p9n' Feb 1 11:04:12.629: INFO: stderr: "No resources found.\n" Feb 1 11:04:12.629: INFO: stdout: "" Feb 1 11:04:12.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-h7p9n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 1 11:04:12.794: INFO: stderr: "" Feb 1 11:04:12.795: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:04:12.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h7p9n" for this suite. Feb 1 11:04:36.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:04:36.967: INFO: namespace: e2e-tests-kubectl-h7p9n, resource: bindings, ignored listing per whitelist Feb 1 11:04:37.056: INFO: namespace e2e-tests-kubectl-h7p9n deletion completed in 24.243000725s • [SLOW TEST:37.985 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:04:37.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:04:37.327: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-gjzdv" to be "success or failure" Feb 1 11:04:37.356: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.721044ms Feb 1 11:04:40.448: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121568081s Feb 1 11:04:42.467: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.139946087s Feb 1 11:04:44.500: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.173168607s Feb 1 11:04:46.528: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201134698s Feb 1 11:04:48.597: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.270135896s STEP: Saw pod success Feb 1 11:04:48.597: INFO: Pod "downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:04:48.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:04:48.781: INFO: Waiting for pod downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005 to disappear Feb 1 11:04:48.787: INFO: Pod downwardapi-volume-a2d06fd0-44e2-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:04:48.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gjzdv" for this suite. Feb 1 11:04:55.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:04:55.729: INFO: namespace: e2e-tests-projected-gjzdv, resource: bindings, ignored listing per whitelist Feb 1 11:04:55.897: INFO: namespace e2e-tests-projected-gjzdv deletion completed in 7.104195608s • [SLOW TEST:18.840 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:04:55.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ae07c91e-44e2-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 11:04:56.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-5rg64" to be "success or failure" Feb 1 11:04:56.193: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.268879ms Feb 1 11:04:58.748: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.586672043s Feb 1 11:05:00.788: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.626251498s Feb 1 11:05:02.802: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.639813948s Feb 1 11:05:04.812: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650411929s Feb 1 11:05:06.833: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.671593526s Feb 1 11:05:08.851: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.689510102s STEP: Saw pod success Feb 1 11:05:08.851: INFO: Pod "pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:05:08.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 1 11:05:08.989: INFO: Waiting for pod pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005 to disappear Feb 1 11:05:09.014: INFO: Pod pod-configmaps-ae0ba013-44e2-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:05:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5rg64" for this suite. Feb 1 11:05:15.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:05:15.460: INFO: namespace: e2e-tests-configmap-5rg64, resource: bindings, ignored listing per whitelist Feb 1 11:05:15.472: INFO: namespace e2e-tests-configmap-5rg64 deletion completed in 6.367672051s • [SLOW TEST:19.576 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:05:15.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-m8znf in namespace e2e-tests-proxy-l27c6 I0201 11:05:15.911081 8 runners.go:184] Created replication controller with name: proxy-service-m8znf, namespace: e2e-tests-proxy-l27c6, replica count: 1 I0201 11:05:16.962194 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:17.962695 8 
runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:18.963067 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:19.963457 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:20.964239 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:21.964672 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:22.965246 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:23.966135 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:05:24.966789 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:25.967213 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:26.967589 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:27.967914 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:28.968240 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:29.968533 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:30.969051 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:31.969451 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0201 11:05:32.969955 8 runners.go:184] proxy-service-m8znf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 1 11:05:33.034: INFO: Endpoint e2e-tests-proxy-l27c6/proxy-service-m8znf is not ready yet Feb 1 11:05:35.062: INFO: setup took 19.330500938s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 1 11:05:35.109: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-l27c6/pods/proxy-service-m8znf-gn4l6:162/proxy/: bar (200; 46.391993ms) Feb 1 11:05:35.109: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-l27c6/pods/http:proxy-service-m8znf-gn4l6:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-cd00f271-44e2-11ea-a88d-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-cd00f339-44e2-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cd00f271-44e2-11ea-a88d-0242ac110005 STEP: Updating configmap cm-test-opt-upd-cd00f339-44e2-11ea-a88d-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-cd00f3c2-44e2-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:06:02.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fvxll" for this suite. Feb 1 11:06:26.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:06:26.795: INFO: namespace: e2e-tests-configmap-fvxll, resource: bindings, ignored listing per whitelist Feb 1 11:06:26.863: INFO: namespace e2e-tests-configmap-fvxll deletion completed in 24.293254267s • [SLOW TEST:38.982 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:06:26.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0201 11:06:57.747118 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 1 11:06:57.747: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:06:57.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7rnbc" for this suite. Feb 1 11:07:05.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:07:05.833: INFO: namespace: e2e-tests-gc-7rnbc, resource: bindings, ignored listing per whitelist Feb 1 11:07:05.904: INFO: namespace e2e-tests-gc-7rnbc deletion completed in 8.150618618s • [SLOW TEST:39.040 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:07:05.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 1 11:10:10.474: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:10.567: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:12.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:12.586: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:14.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:14.592: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:16.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:16.609: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:18.570: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:18.585: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:20.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:20.601: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:22.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:22.584: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:24.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:24.710: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:26.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:26.615: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:28.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:28.693: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:30.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:30.592: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:32.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:32.612: INFO: Pod pod-with-poststart-exec-hook still exists Feb 1 11:10:34.568: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 1 11:10:34.599: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:10:34.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4gkvd" for this suite. 
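The lifecycle-hook test above attaches a postStart exec hook to a pod and verifies the hook ran before deleting it (the long tail of "still exists" lines is just the deletion being polled every two seconds). A minimal sketch of a pod with such a hook, assuming a busybox image and a hypothetical marker file rather than the handler pod the suite actually checks against:

kubectl apply -n e2e-tests-container-lifecycle-hook-4gkvd -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/poststart"]  # hypothetical hook body
EOF
# the hook must complete before the container is reported Running
kubectl exec -n e2e-tests-container-lifecycle-hook-4gkvd pod-with-poststart-exec-hook -- cat /tmp/poststart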
Feb 1 11:10:58.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:10:58.940: INFO: namespace: e2e-tests-container-lifecycle-hook-4gkvd, resource: bindings, ignored listing per whitelist Feb 1 11:10:58.959: INFO: namespace e2e-tests-container-lifecycle-hook-4gkvd deletion completed in 24.333076131s • [SLOW TEST:233.054 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:10:58.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 1 11:10:59.167: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:11:15.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-qv927" for this suite. 
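The init-container test above builds a pod with restartPolicy Never whose init container fails, and asserts that the app container never starts and the pod ends up Failed. A rough hand-rolled version with hypothetical names and a deliberately failing init command:

kubectl apply -n e2e-tests-init-container-qv927 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "exit 1"]
  containers:
  - name: app-never-starts
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
EOF
# expect phase Failed and a non-zero exit code on the init container
kubectl get pod init-fail-demo -n e2e-tests-init-container-qv927 \
  -o jsonpath='{.status.phase}{"\n"}{.status.initContainerStatuses[0].state.terminated.exitCode}{"\n"}'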
Feb 1 11:11:22.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:11:22.208: INFO: namespace: e2e-tests-init-container-qv927, resource: bindings, ignored listing per whitelist Feb 1 11:11:22.417: INFO: namespace e2e-tests-init-container-qv927 deletion completed in 6.591066895s • [SLOW TEST:23.459 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:11:22.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:11:22.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 1 11:11:22.859: INFO: stderr: "" Feb 1 11:11:22.859: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 1 11:11:22.869: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:11:22.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tk4dk" for this suite. 
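The describe test above is skipped by a version gate: the framework compares the discovered server version (v1.13.8 here) against "1.13.12" before running. Had it run, its checks would presumably boil down to describe calls of roughly this shape; the resource names are placeholders, not taken from this skipped run:

kubectl version                                  # the gate compares the server side of this output
kubectl describe rc <rc-name> -n <namespace>     # expects name, replica counts, pod status and events
kubectl describe pod <pod-name> -n <namespace>   # expects node, labels, container statuses and events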
Feb 1 11:11:28.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:11:29.150: INFO: namespace: e2e-tests-kubectl-tk4dk, resource: bindings, ignored listing per whitelist Feb 1 11:11:29.290: INFO: namespace e2e-tests-kubectl-tk4dk deletion completed in 6.407579323s S [SKIPPING] [6.873 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:11:22.869: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:11:29.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 1 11:11:29.676: INFO: Waiting up to 5m0s for pod "pod-98978597-44e3-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-db95l" to be "success or failure" Feb 1 11:11:29.701: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.691216ms Feb 1 11:11:31.710: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034215238s Feb 1 11:11:33.734: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058171981s Feb 1 11:11:35.789: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112876033s Feb 1 11:11:37.804: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12749447s Feb 1 11:11:40.272: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.595891263s STEP: Saw pod success Feb 1 11:11:40.272: INFO: Pod "pod-98978597-44e3-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:11:40.282: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-98978597-44e3-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:11:40.711: INFO: Waiting for pod pod-98978597-44e3-11ea-a88d-0242ac110005 to disappear Feb 1 11:11:40.744: INFO: Pod pod-98978597-44e3-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:11:40.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-db95l" for this suite. Feb 1 11:11:46.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:11:47.109: INFO: namespace: e2e-tests-emptydir-db95l, resource: bindings, ignored listing per whitelist Feb 1 11:11:47.109: INFO: namespace e2e-tests-emptydir-db95l deletion completed in 6.340345108s • [SLOW TEST:17.817 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:11:47.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 1 11:11:58.022: INFO: Successfully updated pod "pod-update-a320cc04-44e3-11ea-a88d-0242ac110005" STEP: verifying the updated pod is in kubernetes Feb 1 11:11:58.053: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:11:58.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dxqcp" for this suite. 
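The `[k8s.io] Pods should be updated` spec above fetches the running pod, mutates its labels, and writes it back ("Successfully updated pod ... Pod update OK"). A minimal client-go sketch of that read-modify-write loop, using current client-go signatures and an illustrative pod name (the vendored 1.13-era client did not take a context; this is not the framework's own code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "default", "pod-update-demo" // hypothetical pod

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the resourceVersion is fresh.
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		// Only mutable pod fields (labels, annotations, container image, ...) may change on update.
		pod.Labels["time"] = "updated"
		_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod update OK")
}
```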
Feb 1 11:12:22.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:12:22.253: INFO: namespace: e2e-tests-pods-dxqcp, resource: bindings, ignored listing per whitelist Feb 1 11:12:22.297: INFO: namespace e2e-tests-pods-dxqcp deletion completed in 24.237505721s • [SLOW TEST:35.188 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:12:22.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-b82ee755-44e3-11ea-a88d-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-b82ee6cb-44e3-11ea-a88d-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 1 11:12:22.804: INFO: Waiting up to 5m0s for pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-6s6sx" to be "success or failure" Feb 1 11:12:22.831: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.759884ms Feb 1 11:12:24.853: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048971198s Feb 1 11:12:26.899: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094926751s Feb 1 11:12:28.914: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10987086s Feb 1 11:12:31.215: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410782847s Feb 1 11:12:33.238: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.433810028s STEP: Saw pod success Feb 1 11:12:33.238: INFO: Pod "projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:12:33.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005 container projected-all-volume-test: STEP: delete the pod Feb 1 11:12:33.468: INFO: Waiting for pod projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005 to disappear Feb 1 11:12:33.478: INFO: Pod projected-volume-b82ee5cb-44e3-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:12:33.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6s6sx" for this suite. Feb 1 11:12:39.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:12:39.681: INFO: namespace: e2e-tests-projected-6s6sx, resource: bindings, ignored listing per whitelist Feb 1 11:12:39.715: INFO: namespace e2e-tests-projected-6s6sx deletion completed in 6.229679586s • [SLOW TEST:17.418 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:12:39.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 1 11:12:40.005: INFO: Waiting up to 5m0s for pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005" in namespace "e2e-tests-containers-bzkq4" to be "success or failure" Feb 1 11:12:40.015: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.233575ms Feb 1 11:12:42.022: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016958499s Feb 1 11:12:44.033: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027978131s Feb 1 11:12:46.046: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041355374s Feb 1 11:12:48.072: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.06680884s STEP: Saw pod success Feb 1 11:12:48.072: INFO: Pod "client-containers-c2838461-44e3-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:12:48.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c2838461-44e3-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:12:48.186: INFO: Waiting for pod client-containers-c2838461-44e3-11ea-a88d-0242ac110005 to disappear Feb 1 11:12:48.193: INFO: Pod client-containers-c2838461-44e3-11ea-a88d-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:12:48.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bzkq4" for this suite. Feb 1 11:12:54.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:12:54.666: INFO: namespace: e2e-tests-containers-bzkq4, resource: bindings, ignored listing per whitelist Feb 1 11:12:54.804: INFO: namespace e2e-tests-containers-bzkq4 deletion completed in 6.501118467s • [SLOW TEST:15.089 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:12:54.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:12:55.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-66d9q" to be "success or failure" Feb 1 11:12:55.022: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.378818ms Feb 1 11:12:57.066: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0486971s Feb 1 11:13:00.281: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.264340737s Feb 1 11:13:02.303: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.285873134s Feb 1 11:13:04.314: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.297232405s STEP: Saw pod success Feb 1 11:13:04.314: INFO: Pod "downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:13:04.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:13:04.400: INFO: Waiting for pod downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005 to disappear Feb 1 11:13:04.417: INFO: Pod downwardapi-volume-cb6d59bc-44e3-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:13:04.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-66d9q" for this suite. Feb 1 11:13:11.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:13:11.616: INFO: namespace: e2e-tests-projected-66d9q, resource: bindings, ignored listing per whitelist Feb 1 11:13:11.772: INFO: namespace e2e-tests-projected-66d9q deletion completed in 6.485275015s • [SLOW TEST:16.967 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:13:11.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-d5c2c32f-44e3-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d5c2c32f-44e3-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:14:40.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-892kj" for this suite. 
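The ConfigMap spec above ("updates should be reflected in volume") creates a ConfigMap, mounts it into a pod, updates the ConfigMap, and waits for the kubelet to rewrite the files in the volume, which is why it spends most of its 112 seconds in "waiting to observe update in volume". A sketch of the object shapes involved, with made-up names and data, not the test's own fixture:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox",
				// Keep re-reading the file so an update to the ConfigMap becomes visible.
				Command:      []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The key design point the test exercises is that ConfigMap-backed volumes are eventually consistent: the kubelet refreshes their contents on its sync loop rather than at the moment the ConfigMap is updated.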
Feb 1 11:15:04.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:15:04.620: INFO: namespace: e2e-tests-configmap-892kj, resource: bindings, ignored listing per whitelist Feb 1 11:15:04.674: INFO: namespace e2e-tests-configmap-892kj deletion completed in 24.274024335s • [SLOW TEST:112.902 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:15:04.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-18dd3829-44e4-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 11:15:04.939: INFO: Waiting up to 5m0s for pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-2s6hd" to be "success or failure" Feb 1 11:15:04.953: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.998324ms Feb 1 11:15:06.969: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029517049s Feb 1 11:15:08.984: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044168768s Feb 1 11:15:10.996: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056002051s Feb 1 11:15:13.047: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107790414s STEP: Saw pod success Feb 1 11:15:13.048: INFO: Pod "pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:15:13.055: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 1 11:15:13.147: INFO: Waiting for pod pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005 to disappear Feb 1 11:15:13.157: INFO: Pod pod-secrets-18df14e4-44e4-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:15:13.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2s6hd" for this suite. 
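The Secrets spec above mounts a Secret as a volume and has the test container read the resulting file back. A compact sketch of the Secret and volume wiring, with illustrative names; each key in `.data` becomes a file of the same name under the mount path, and per the v1 API the file mode defaults to 0644 when `defaultMode` is unset:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0444)
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-demo"},
		Type:       corev1.SecretTypeOpaque,
		Data:       map[string][]byte{"data-1": []byte("value-1")}, // becomes /etc/secret-volume/data-1
	}
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secret.Name,
				DefaultMode: &mode, // related specs in this family assert explicit modes like this
			},
		},
	}
	out, _ := json.MarshalIndent(struct {
		Secret *corev1.Secret `json:"secret"`
		Volume corev1.Volume  `json:"volume"`
	}{secret, vol}, "", "  ")
	fmt.Println(string(out))
}
```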
Feb 1 11:15:21.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:15:21.456: INFO: namespace: e2e-tests-secrets-2s6hd, resource: bindings, ignored listing per whitelist Feb 1 11:15:21.472: INFO: namespace e2e-tests-secrets-2s6hd deletion completed in 8.304657926s • [SLOW TEST:16.798 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:15:21.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-22e4eafe-44e4-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 11:15:21.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-n4q89" to be "success or failure" Feb 1 11:15:21.748: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.84821ms Feb 1 11:15:23.762: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043972703s Feb 1 11:15:26.303: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585383398s Feb 1 11:15:28.314: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596701398s Feb 1 11:15:30.325: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.607589113s STEP: Saw pod success Feb 1 11:15:30.325: INFO: Pod "pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:15:30.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 1 11:15:30.946: INFO: Waiting for pod pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005 to disappear Feb 1 11:15:31.163: INFO: Pod pod-projected-secrets-22e6b345-44e4-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:15:31.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n4q89" for this suite. 
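The projected-secret spec above, together with the earlier "Projected combined" and "Projected downwardAPI should set DefaultMode" specs, all exercise the same volume type: a single projected volume that merges secret, configMap, and downward API sources under one mount and applies a shared default file mode. A sketch of that volume shape, with hypothetical source names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0644) // the mode the DefaultMode spec asserts on the projected files
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```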
Feb 1 11:15:37.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:15:37.276: INFO: namespace: e2e-tests-projected-n4q89, resource: bindings, ignored listing per whitelist Feb 1 11:15:37.367: INFO: namespace e2e-tests-projected-n4q89 deletion completed in 6.197806371s • [SLOW TEST:15.895 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:15:37.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-szz58 I0201 11:15:37.775276 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-szz58, replica count: 1 I0201 11:15:38.826222 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:39.826509 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:40.826753 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:41.827030 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:42.827506 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:43.827844 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:44.828304 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:45.828587 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0201 11:15:46.828827 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 1 11:15:47.015: INFO: Created: latency-svc-jw5r8 Feb 1 11:15:47.086: INFO: Got endpoints: latency-svc-jw5r8 [157.569989ms] Feb 1 11:15:47.169: INFO: Created: latency-svc-prt5v Feb 1 11:15:47.317: INFO: Got endpoints: latency-svc-prt5v [229.330932ms] Feb 1 11:15:47.367: INFO: Created: latency-svc-pgszq Feb 1 
11:15:47.515: INFO: Got endpoints: latency-svc-pgszq [426.837169ms] Feb 1 11:15:47.541: INFO: Created: latency-svc-v2tff Feb 1 11:15:47.554: INFO: Got endpoints: latency-svc-v2tff [465.723854ms] Feb 1 11:15:47.750: INFO: Created: latency-svc-r4wdx Feb 1 11:15:47.780: INFO: Got endpoints: latency-svc-r4wdx [692.573903ms] Feb 1 11:15:47.973: INFO: Created: latency-svc-xphzt Feb 1 11:15:47.981: INFO: Got endpoints: latency-svc-xphzt [893.234774ms] Feb 1 11:15:48.038: INFO: Created: latency-svc-gcflx Feb 1 11:15:48.227: INFO: Got endpoints: latency-svc-gcflx [1.139774004s] Feb 1 11:15:48.238: INFO: Created: latency-svc-qmm5l Feb 1 11:15:48.257: INFO: Got endpoints: latency-svc-qmm5l [1.169707703s] Feb 1 11:15:48.323: INFO: Created: latency-svc-8wc2p Feb 1 11:15:48.478: INFO: Got endpoints: latency-svc-8wc2p [1.390320715s] Feb 1 11:15:48.661: INFO: Created: latency-svc-smm6n Feb 1 11:15:48.677: INFO: Got endpoints: latency-svc-smm6n [1.590131883s] Feb 1 11:15:48.735: INFO: Created: latency-svc-cwm4w Feb 1 11:15:48.738: INFO: Got endpoints: latency-svc-cwm4w [1.650506822s] Feb 1 11:15:48.919: INFO: Created: latency-svc-7dljw Feb 1 11:15:48.937: INFO: Got endpoints: latency-svc-7dljw [1.849022599s] Feb 1 11:15:49.163: INFO: Created: latency-svc-zcwdm Feb 1 11:15:49.198: INFO: Got endpoints: latency-svc-zcwdm [2.109340349s] Feb 1 11:15:49.247: INFO: Created: latency-svc-5rps8 Feb 1 11:15:49.322: INFO: Got endpoints: latency-svc-5rps8 [2.234000669s] Feb 1 11:15:49.424: INFO: Created: latency-svc-7lml4 Feb 1 11:15:49.424: INFO: Got endpoints: latency-svc-7lml4 [2.336483134s] Feb 1 11:15:49.775: INFO: Created: latency-svc-h28c4 Feb 1 11:15:49.920: INFO: Got endpoints: latency-svc-h28c4 [2.832635875s] Feb 1 11:15:49.971: INFO: Created: latency-svc-xnqhs Feb 1 11:15:50.100: INFO: Got endpoints: latency-svc-xnqhs [2.782341855s] Feb 1 11:15:50.154: INFO: Created: latency-svc-v4hzr Feb 1 11:15:50.269: INFO: Got endpoints: latency-svc-v4hzr [2.753847867s] Feb 1 11:15:50.292: INFO: Created: latency-svc-9ks2j Feb 1 11:15:50.302: INFO: Got endpoints: latency-svc-9ks2j [2.747991615s] Feb 1 11:15:50.355: INFO: Created: latency-svc-qsptp Feb 1 11:15:50.508: INFO: Got endpoints: latency-svc-qsptp [2.727974817s] Feb 1 11:15:50.546: INFO: Created: latency-svc-n4wwx Feb 1 11:15:50.753: INFO: Got endpoints: latency-svc-n4wwx [2.771682498s] Feb 1 11:15:50.963: INFO: Created: latency-svc-ddwj8 Feb 1 11:15:51.012: INFO: Got endpoints: latency-svc-ddwj8 [2.78557828s] Feb 1 11:15:51.023: INFO: Created: latency-svc-4n2pd Feb 1 11:15:51.028: INFO: Got endpoints: latency-svc-4n2pd [2.770416536s] Feb 1 11:15:51.183: INFO: Created: latency-svc-bvbgh Feb 1 11:15:51.191: INFO: Got endpoints: latency-svc-bvbgh [2.712678044s] Feb 1 11:15:51.379: INFO: Created: latency-svc-6lg68 Feb 1 11:15:51.393: INFO: Got endpoints: latency-svc-6lg68 [2.716224028s] Feb 1 11:15:51.472: INFO: Created: latency-svc-8nvht Feb 1 11:15:51.583: INFO: Got endpoints: latency-svc-8nvht [2.844565895s] Feb 1 11:15:51.607: INFO: Created: latency-svc-n2j5x Feb 1 11:15:51.625: INFO: Got endpoints: latency-svc-n2j5x [2.688229622s] Feb 1 11:15:51.663: INFO: Created: latency-svc-bzpmj Feb 1 11:15:51.760: INFO: Got endpoints: latency-svc-bzpmj [2.561825191s] Feb 1 11:15:51.804: INFO: Created: latency-svc-zr8d7 Feb 1 11:15:52.091: INFO: Got endpoints: latency-svc-zr8d7 [2.769110922s] Feb 1 11:15:52.145: INFO: Created: latency-svc-8fpt8 Feb 1 11:15:52.328: INFO: Got endpoints: latency-svc-8fpt8 [2.9042173s] Feb 1 11:15:52.345: INFO: Created: latency-svc-5rkvk Feb 1 
11:15:52.397: INFO: Got endpoints: latency-svc-5rkvk [2.475883903s] Feb 1 11:15:52.602: INFO: Created: latency-svc-n99cd Feb 1 11:15:52.603: INFO: Got endpoints: latency-svc-n99cd [2.502995273s] Feb 1 11:15:52.665: INFO: Created: latency-svc-c7tml Feb 1 11:15:52.677: INFO: Got endpoints: latency-svc-c7tml [2.407836433s] Feb 1 11:15:52.790: INFO: Created: latency-svc-vwlck Feb 1 11:15:52.798: INFO: Got endpoints: latency-svc-vwlck [2.496596501s] Feb 1 11:15:52.855: INFO: Created: latency-svc-ngtrh Feb 1 11:15:53.007: INFO: Got endpoints: latency-svc-ngtrh [2.497951542s] Feb 1 11:15:53.022: INFO: Created: latency-svc-cvnrb Feb 1 11:15:53.045: INFO: Got endpoints: latency-svc-cvnrb [2.292074973s] Feb 1 11:15:53.110: INFO: Created: latency-svc-dtv6l Feb 1 11:15:53.305: INFO: Got endpoints: latency-svc-dtv6l [2.292119308s] Feb 1 11:15:53.328: INFO: Created: latency-svc-7kglg Feb 1 11:15:53.351: INFO: Got endpoints: latency-svc-7kglg [2.323108166s] Feb 1 11:15:53.414: INFO: Created: latency-svc-bb86q Feb 1 11:15:53.561: INFO: Got endpoints: latency-svc-bb86q [2.369785168s] Feb 1 11:15:53.609: INFO: Created: latency-svc-x58hh Feb 1 11:15:53.645: INFO: Got endpoints: latency-svc-x58hh [2.251374029s] Feb 1 11:15:53.779: INFO: Created: latency-svc-7pwxp Feb 1 11:15:54.108: INFO: Got endpoints: latency-svc-7pwxp [2.525037935s] Feb 1 11:15:54.122: INFO: Created: latency-svc-9sn4p Feb 1 11:15:54.145: INFO: Got endpoints: latency-svc-9sn4p [2.519661648s] Feb 1 11:15:54.379: INFO: Created: latency-svc-5h5b5 Feb 1 11:15:54.379: INFO: Got endpoints: latency-svc-5h5b5 [2.619402797s] Feb 1 11:15:54.562: INFO: Created: latency-svc-wnx6d Feb 1 11:15:54.576: INFO: Got endpoints: latency-svc-wnx6d [2.484633906s] Feb 1 11:15:54.825: INFO: Created: latency-svc-xsg7b Feb 1 11:15:54.835: INFO: Got endpoints: latency-svc-xsg7b [2.506425056s] Feb 1 11:15:54.981: INFO: Created: latency-svc-wn8cm Feb 1 11:15:54.999: INFO: Got endpoints: latency-svc-wn8cm [2.601830293s] Feb 1 11:15:55.052: INFO: Created: latency-svc-m799k Feb 1 11:15:55.064: INFO: Got endpoints: latency-svc-m799k [2.460567453s] Feb 1 11:15:55.190: INFO: Created: latency-svc-r79pf Feb 1 11:15:55.210: INFO: Got endpoints: latency-svc-r79pf [2.532604131s] Feb 1 11:15:55.265: INFO: Created: latency-svc-n6wrl Feb 1 11:15:55.265: INFO: Got endpoints: latency-svc-n6wrl [2.466867987s] Feb 1 11:15:55.413: INFO: Created: latency-svc-l6vwn Feb 1 11:15:55.440: INFO: Got endpoints: latency-svc-l6vwn [2.432979888s] Feb 1 11:15:55.646: INFO: Created: latency-svc-fdgvt Feb 1 11:15:55.680: INFO: Got endpoints: latency-svc-fdgvt [2.634852332s] Feb 1 11:15:55.897: INFO: Created: latency-svc-lm9lb Feb 1 11:15:55.909: INFO: Got endpoints: latency-svc-lm9lb [2.603831722s] Feb 1 11:15:56.137: INFO: Created: latency-svc-js42k Feb 1 11:15:56.152: INFO: Got endpoints: latency-svc-js42k [2.800543069s] Feb 1 11:15:56.372: INFO: Created: latency-svc-hmvph Feb 1 11:15:56.395: INFO: Created: latency-svc-42fjp Feb 1 11:15:56.400: INFO: Got endpoints: latency-svc-hmvph [2.839590873s] Feb 1 11:15:56.436: INFO: Got endpoints: latency-svc-42fjp [2.790816864s] Feb 1 11:15:56.603: INFO: Created: latency-svc-xch24 Feb 1 11:15:56.622: INFO: Got endpoints: latency-svc-xch24 [2.513533804s] Feb 1 11:15:56.800: INFO: Created: latency-svc-ckvf5 Feb 1 11:15:56.842: INFO: Got endpoints: latency-svc-ckvf5 [2.696407945s] Feb 1 11:15:56.974: INFO: Created: latency-svc-895zv Feb 1 11:15:56.993: INFO: Got endpoints: latency-svc-895zv [2.613034198s] Feb 1 11:15:57.042: INFO: Created: latency-svc-86c8q Feb 
1 11:15:57.169: INFO: Got endpoints: latency-svc-86c8q [2.59242959s] Feb 1 11:15:57.186: INFO: Created: latency-svc-mv9jg Feb 1 11:15:57.263: INFO: Created: latency-svc-5ptjx Feb 1 11:15:57.269: INFO: Got endpoints: latency-svc-mv9jg [2.434387755s] Feb 1 11:15:57.371: INFO: Got endpoints: latency-svc-5ptjx [2.372163265s] Feb 1 11:15:57.411: INFO: Created: latency-svc-2h9q8 Feb 1 11:15:57.412: INFO: Got endpoints: latency-svc-2h9q8 [2.348416444s] Feb 1 11:15:57.576: INFO: Created: latency-svc-d44zj Feb 1 11:15:57.583: INFO: Got endpoints: latency-svc-d44zj [2.373617983s] Feb 1 11:15:57.662: INFO: Created: latency-svc-7t7jw Feb 1 11:15:57.786: INFO: Got endpoints: latency-svc-7t7jw [2.520724485s] Feb 1 11:15:57.802: INFO: Created: latency-svc-fj5qm Feb 1 11:15:57.815: INFO: Got endpoints: latency-svc-fj5qm [2.374696834s] Feb 1 11:15:58.068: INFO: Created: latency-svc-mbqn7 Feb 1 11:15:58.103: INFO: Got endpoints: latency-svc-mbqn7 [2.422524254s] Feb 1 11:15:58.225: INFO: Created: latency-svc-vwndk Feb 1 11:15:58.240: INFO: Got endpoints: latency-svc-vwndk [2.331100085s] Feb 1 11:15:58.297: INFO: Created: latency-svc-dq2ns Feb 1 11:15:58.527: INFO: Got endpoints: latency-svc-dq2ns [2.375312742s] Feb 1 11:15:58.613: INFO: Created: latency-svc-bgn4x Feb 1 11:15:58.725: INFO: Got endpoints: latency-svc-bgn4x [2.324802148s] Feb 1 11:15:59.199: INFO: Created: latency-svc-7fsgv Feb 1 11:15:59.320: INFO: Got endpoints: latency-svc-7fsgv [2.884023343s] Feb 1 11:15:59.627: INFO: Created: latency-svc-45n78 Feb 1 11:15:59.639: INFO: Got endpoints: latency-svc-45n78 [3.017561635s] Feb 1 11:15:59.930: INFO: Created: latency-svc-tmgzp Feb 1 11:15:59.963: INFO: Got endpoints: latency-svc-tmgzp [3.120898406s] Feb 1 11:16:00.152: INFO: Created: latency-svc-g4h6m Feb 1 11:16:00.180: INFO: Got endpoints: latency-svc-g4h6m [3.187096962s] Feb 1 11:16:00.226: INFO: Created: latency-svc-vgrck Feb 1 11:16:00.341: INFO: Created: latency-svc-qpv59 Feb 1 11:16:00.352: INFO: Got endpoints: latency-svc-vgrck [3.182747486s] Feb 1 11:16:00.371: INFO: Got endpoints: latency-svc-qpv59 [3.101487404s] Feb 1 11:16:00.573: INFO: Created: latency-svc-flsjh Feb 1 11:16:00.589: INFO: Got endpoints: latency-svc-flsjh [3.217646897s] Feb 1 11:16:00.661: INFO: Created: latency-svc-x8kkb Feb 1 11:16:00.662: INFO: Got endpoints: latency-svc-x8kkb [3.24944826s] Feb 1 11:16:00.786: INFO: Created: latency-svc-t2rmj Feb 1 11:16:00.811: INFO: Got endpoints: latency-svc-t2rmj [3.227422056s] Feb 1 11:16:00.894: INFO: Created: latency-svc-686pp Feb 1 11:16:01.085: INFO: Got endpoints: latency-svc-686pp [3.298869199s] Feb 1 11:16:01.096: INFO: Created: latency-svc-ml6rc Feb 1 11:16:01.107: INFO: Got endpoints: latency-svc-ml6rc [3.291782075s] Feb 1 11:16:01.168: INFO: Created: latency-svc-6cjhc Feb 1 11:16:01.376: INFO: Got endpoints: latency-svc-6cjhc [3.272501566s] Feb 1 11:16:01.406: INFO: Created: latency-svc-bdbrc Feb 1 11:16:01.433: INFO: Got endpoints: latency-svc-bdbrc [3.19246516s] Feb 1 11:16:01.648: INFO: Created: latency-svc-ksbml Feb 1 11:16:01.666: INFO: Got endpoints: latency-svc-ksbml [3.138454215s] Feb 1 11:16:01.762: INFO: Created: latency-svc-vfpz6 Feb 1 11:16:01.941: INFO: Got endpoints: latency-svc-vfpz6 [3.215371871s] Feb 1 11:16:01.977: INFO: Created: latency-svc-nfjdv Feb 1 11:16:02.176: INFO: Got endpoints: latency-svc-nfjdv [2.856183956s] Feb 1 11:16:02.195: INFO: Created: latency-svc-wbcsm Feb 1 11:16:02.200: INFO: Got endpoints: latency-svc-wbcsm [2.56025379s] Feb 1 11:16:02.275: INFO: Created: latency-svc-rtgpg Feb 1 
11:16:02.399: INFO: Got endpoints: latency-svc-rtgpg [2.436092525s] Feb 1 11:16:02.433: INFO: Created: latency-svc-7jgtn Feb 1 11:16:02.475: INFO: Got endpoints: latency-svc-7jgtn [2.295009142s] Feb 1 11:16:02.597: INFO: Created: latency-svc-p9qnk Feb 1 11:16:02.633: INFO: Got endpoints: latency-svc-p9qnk [2.281231091s] Feb 1 11:16:02.791: INFO: Created: latency-svc-fbq7j Feb 1 11:16:02.799: INFO: Got endpoints: latency-svc-fbq7j [2.428053327s] Feb 1 11:16:02.868: INFO: Created: latency-svc-9fpm2 Feb 1 11:16:03.021: INFO: Got endpoints: latency-svc-9fpm2 [2.431357253s] Feb 1 11:16:03.073: INFO: Created: latency-svc-8mbhq Feb 1 11:16:03.207: INFO: Created: latency-svc-kfw8v Feb 1 11:16:03.215: INFO: Got endpoints: latency-svc-8mbhq [2.552919461s] Feb 1 11:16:03.223: INFO: Got endpoints: latency-svc-kfw8v [2.41233279s] Feb 1 11:16:03.393: INFO: Created: latency-svc-slwg4 Feb 1 11:16:03.405: INFO: Got endpoints: latency-svc-slwg4 [2.319156748s] Feb 1 11:16:03.501: INFO: Created: latency-svc-wwsns Feb 1 11:16:03.648: INFO: Got endpoints: latency-svc-wwsns [2.541601595s] Feb 1 11:16:03.687: INFO: Created: latency-svc-5dk9p Feb 1 11:16:03.816: INFO: Got endpoints: latency-svc-5dk9p [2.440215888s] Feb 1 11:16:03.818: INFO: Created: latency-svc-sdmlq Feb 1 11:16:03.892: INFO: Got endpoints: latency-svc-sdmlq [2.459364489s] Feb 1 11:16:04.116: INFO: Created: latency-svc-pjhb9 Feb 1 11:16:04.165: INFO: Got endpoints: latency-svc-pjhb9 [2.498491582s] Feb 1 11:16:04.191: INFO: Created: latency-svc-pbkhh Feb 1 11:16:04.386: INFO: Got endpoints: latency-svc-pbkhh [2.445077166s] Feb 1 11:16:04.429: INFO: Created: latency-svc-4nntd Feb 1 11:16:04.634: INFO: Got endpoints: latency-svc-4nntd [2.457448013s] Feb 1 11:16:04.650: INFO: Created: latency-svc-lzn98 Feb 1 11:16:04.709: INFO: Got endpoints: latency-svc-lzn98 [2.509273803s] Feb 1 11:16:04.836: INFO: Created: latency-svc-qxtrt Feb 1 11:16:04.880: INFO: Got endpoints: latency-svc-qxtrt [2.480491782s] Feb 1 11:16:04.885: INFO: Created: latency-svc-5b27m Feb 1 11:16:05.048: INFO: Got endpoints: latency-svc-5b27m [2.572488025s] Feb 1 11:16:05.073: INFO: Created: latency-svc-tcrtb Feb 1 11:16:05.109: INFO: Got endpoints: latency-svc-tcrtb [2.475722043s] Feb 1 11:16:05.238: INFO: Created: latency-svc-7n9pc Feb 1 11:16:05.253: INFO: Got endpoints: latency-svc-7n9pc [2.453672653s] Feb 1 11:16:05.300: INFO: Created: latency-svc-z7kgv Feb 1 11:16:05.444: INFO: Got endpoints: latency-svc-z7kgv [2.422865604s] Feb 1 11:16:05.456: INFO: Created: latency-svc-xx7hz Feb 1 11:16:05.467: INFO: Got endpoints: latency-svc-xx7hz [2.251660382s] Feb 1 11:16:05.529: INFO: Created: latency-svc-cn6wh Feb 1 11:16:05.666: INFO: Got endpoints: latency-svc-cn6wh [2.442963564s] Feb 1 11:16:05.713: INFO: Created: latency-svc-94md6 Feb 1 11:16:05.717: INFO: Got endpoints: latency-svc-94md6 [2.312071638s] Feb 1 11:16:05.778: INFO: Created: latency-svc-lpp7v Feb 1 11:16:05.903: INFO: Got endpoints: latency-svc-lpp7v [2.254044698s] Feb 1 11:16:05.956: INFO: Created: latency-svc-fww7d Feb 1 11:16:05.977: INFO: Got endpoints: latency-svc-fww7d [2.160413313s] Feb 1 11:16:06.126: INFO: Created: latency-svc-77qlx Feb 1 11:16:06.151: INFO: Got endpoints: latency-svc-77qlx [2.258689418s] Feb 1 11:16:06.372: INFO: Created: latency-svc-srqvh Feb 1 11:16:06.422: INFO: Got endpoints: latency-svc-srqvh [2.25667206s] Feb 1 11:16:06.436: INFO: Created: latency-svc-cn8qb Feb 1 11:16:06.617: INFO: Got endpoints: latency-svc-cn8qb [2.23054601s] Feb 1 11:16:06.720: INFO: Created: latency-svc-7rlmk Feb 1 
11:16:06.865: INFO: Got endpoints: latency-svc-7rlmk [2.23127371s] Feb 1 11:16:06.877: INFO: Created: latency-svc-4xq2h Feb 1 11:16:06.931: INFO: Got endpoints: latency-svc-4xq2h [2.221906863s] Feb 1 11:16:06.961: INFO: Created: latency-svc-x5756 Feb 1 11:16:07.037: INFO: Got endpoints: latency-svc-x5756 [2.157721406s] Feb 1 11:16:07.121: INFO: Created: latency-svc-nxlf5 Feb 1 11:16:07.123: INFO: Got endpoints: latency-svc-nxlf5 [2.075381405s] Feb 1 11:16:07.323: INFO: Created: latency-svc-kgv87 Feb 1 11:16:07.356: INFO: Created: latency-svc-79gqx Feb 1 11:16:07.546: INFO: Got endpoints: latency-svc-kgv87 [2.436675043s] Feb 1 11:16:07.548: INFO: Got endpoints: latency-svc-79gqx [2.29477261s] Feb 1 11:16:07.557: INFO: Created: latency-svc-7zkrn Feb 1 11:16:07.599: INFO: Got endpoints: latency-svc-7zkrn [2.154908931s] Feb 1 11:16:07.638: INFO: Created: latency-svc-zdplm Feb 1 11:16:07.736: INFO: Got endpoints: latency-svc-zdplm [2.269049711s] Feb 1 11:16:07.809: INFO: Created: latency-svc-nxdln Feb 1 11:16:07.816: INFO: Created: latency-svc-tzlkv Feb 1 11:16:07.918: INFO: Got endpoints: latency-svc-tzlkv [2.200946544s] Feb 1 11:16:07.929: INFO: Got endpoints: latency-svc-nxdln [2.262791283s] Feb 1 11:16:07.967: INFO: Created: latency-svc-2mtkp Feb 1 11:16:07.977: INFO: Got endpoints: latency-svc-2mtkp [2.074075754s] Feb 1 11:16:08.135: INFO: Created: latency-svc-k9pbc Feb 1 11:16:08.210: INFO: Got endpoints: latency-svc-k9pbc [2.233221872s] Feb 1 11:16:08.225: INFO: Created: latency-svc-fm6lf Feb 1 11:16:08.318: INFO: Got endpoints: latency-svc-fm6lf [2.166834787s] Feb 1 11:16:08.381: INFO: Created: latency-svc-7xsxq Feb 1 11:16:08.387: INFO: Got endpoints: latency-svc-7xsxq [1.965567663s] Feb 1 11:16:08.417: INFO: Created: latency-svc-zvcqd Feb 1 11:16:08.595: INFO: Got endpoints: latency-svc-zvcqd [1.977618405s] Feb 1 11:16:08.641: INFO: Created: latency-svc-s7sjk Feb 1 11:16:08.659: INFO: Got endpoints: latency-svc-s7sjk [1.79328056s] Feb 1 11:16:08.790: INFO: Created: latency-svc-rrfrp Feb 1 11:16:08.865: INFO: Got endpoints: latency-svc-rrfrp [1.933904696s] Feb 1 11:16:08.984: INFO: Created: latency-svc-nd8v5 Feb 1 11:16:09.005: INFO: Got endpoints: latency-svc-nd8v5 [1.966916927s] Feb 1 11:16:09.163: INFO: Created: latency-svc-w7m6p Feb 1 11:16:09.165: INFO: Got endpoints: latency-svc-w7m6p [2.041820104s] Feb 1 11:16:09.217: INFO: Created: latency-svc-nb2tw Feb 1 11:16:09.335: INFO: Got endpoints: latency-svc-nb2tw [1.788805243s] Feb 1 11:16:09.386: INFO: Created: latency-svc-krhfl Feb 1 11:16:09.391: INFO: Got endpoints: latency-svc-krhfl [1.843475601s] Feb 1 11:16:09.625: INFO: Created: latency-svc-kfd8v Feb 1 11:16:09.647: INFO: Got endpoints: latency-svc-kfd8v [2.047798634s] Feb 1 11:16:09.793: INFO: Created: latency-svc-7kd6p Feb 1 11:16:09.817: INFO: Got endpoints: latency-svc-7kd6p [2.081653485s] Feb 1 11:16:09.893: INFO: Created: latency-svc-5mcnd Feb 1 11:16:09.988: INFO: Got endpoints: latency-svc-5mcnd [2.069675431s] Feb 1 11:16:10.052: INFO: Created: latency-svc-z9x6m Feb 1 11:16:10.065: INFO: Got endpoints: latency-svc-z9x6m [2.135390651s] Feb 1 11:16:10.248: INFO: Created: latency-svc-67zjw Feb 1 11:16:10.263: INFO: Got endpoints: latency-svc-67zjw [2.285193459s] Feb 1 11:16:10.308: INFO: Created: latency-svc-w45rf Feb 1 11:16:10.484: INFO: Got endpoints: latency-svc-w45rf [2.273786412s] Feb 1 11:16:10.523: INFO: Created: latency-svc-z4629 Feb 1 11:16:10.696: INFO: Got endpoints: latency-svc-z4629 [2.377168474s] Feb 1 11:16:10.718: INFO: Created: latency-svc-rc7ff Feb 1 
11:16:10.749: INFO: Got endpoints: latency-svc-rc7ff [265.191253ms] Feb 1 11:16:10.919: INFO: Created: latency-svc-plvkl Feb 1 11:16:10.936: INFO: Got endpoints: latency-svc-plvkl [2.548475996s] Feb 1 11:16:11.006: INFO: Created: latency-svc-sj95b Feb 1 11:16:11.127: INFO: Created: latency-svc-8pps4 Feb 1 11:16:11.131: INFO: Got endpoints: latency-svc-sj95b [2.535694511s] Feb 1 11:16:11.136: INFO: Got endpoints: latency-svc-8pps4 [2.477374285s] Feb 1 11:16:11.333: INFO: Created: latency-svc-rkg8j Feb 1 11:16:11.340: INFO: Got endpoints: latency-svc-rkg8j [2.474049982s] Feb 1 11:16:11.536: INFO: Created: latency-svc-5dspq Feb 1 11:16:11.547: INFO: Got endpoints: latency-svc-5dspq [2.541749686s] Feb 1 11:16:11.625: INFO: Created: latency-svc-pxmtw Feb 1 11:16:11.762: INFO: Got endpoints: latency-svc-pxmtw [2.596379263s] Feb 1 11:16:11.786: INFO: Created: latency-svc-2pwqt Feb 1 11:16:11.950: INFO: Got endpoints: latency-svc-2pwqt [2.613975479s] Feb 1 11:16:11.983: INFO: Created: latency-svc-dg8qj Feb 1 11:16:12.022: INFO: Got endpoints: latency-svc-dg8qj [2.630734487s] Feb 1 11:16:12.255: INFO: Created: latency-svc-dmlkg Feb 1 11:16:12.273: INFO: Got endpoints: latency-svc-dmlkg [2.626174317s] Feb 1 11:16:12.386: INFO: Created: latency-svc-c4hrb Feb 1 11:16:12.397: INFO: Got endpoints: latency-svc-c4hrb [2.579697115s] Feb 1 11:16:12.463: INFO: Created: latency-svc-wmsh8 Feb 1 11:16:12.627: INFO: Got endpoints: latency-svc-wmsh8 [2.638140043s] Feb 1 11:16:12.656: INFO: Created: latency-svc-dkw2d Feb 1 11:16:12.672: INFO: Got endpoints: latency-svc-dkw2d [2.607490305s] Feb 1 11:16:12.811: INFO: Created: latency-svc-526sz Feb 1 11:16:12.867: INFO: Got endpoints: latency-svc-526sz [2.60451981s] Feb 1 11:16:12.881: INFO: Created: latency-svc-zrtwb Feb 1 11:16:12.905: INFO: Got endpoints: latency-svc-zrtwb [2.208600142s] Feb 1 11:16:13.053: INFO: Created: latency-svc-grjkg Feb 1 11:16:13.067: INFO: Got endpoints: latency-svc-grjkg [2.317828726s] Feb 1 11:16:13.232: INFO: Created: latency-svc-btb4q Feb 1 11:16:13.250: INFO: Got endpoints: latency-svc-btb4q [2.313868293s] Feb 1 11:16:13.300: INFO: Created: latency-svc-jnq4l Feb 1 11:16:13.305: INFO: Got endpoints: latency-svc-jnq4l [2.174291271s] Feb 1 11:16:13.437: INFO: Created: latency-svc-5h7vz Feb 1 11:16:13.458: INFO: Got endpoints: latency-svc-5h7vz [2.321281073s] Feb 1 11:16:13.652: INFO: Created: latency-svc-b9jhx Feb 1 11:16:13.815: INFO: Created: latency-svc-6hp4f Feb 1 11:16:13.839: INFO: Got endpoints: latency-svc-b9jhx [2.499046852s] Feb 1 11:16:13.846: INFO: Got endpoints: latency-svc-6hp4f [2.298718428s] Feb 1 11:16:14.008: INFO: Created: latency-svc-p287b Feb 1 11:16:14.026: INFO: Got endpoints: latency-svc-p287b [2.264023396s] Feb 1 11:16:14.209: INFO: Created: latency-svc-k5njr Feb 1 11:16:14.249: INFO: Got endpoints: latency-svc-k5njr [2.298977865s] Feb 1 11:16:15.162: INFO: Created: latency-svc-7xklk Feb 1 11:16:15.202: INFO: Got endpoints: latency-svc-7xklk [3.179320264s] Feb 1 11:16:15.361: INFO: Created: latency-svc-hffnc Feb 1 11:16:15.361: INFO: Got endpoints: latency-svc-hffnc [3.086994877s] Feb 1 11:16:15.433: INFO: Created: latency-svc-526bc Feb 1 11:16:15.524: INFO: Got endpoints: latency-svc-526bc [3.126554512s] Feb 1 11:16:15.549: INFO: Created: latency-svc-bp2g4 Feb 1 11:16:15.557: INFO: Got endpoints: latency-svc-bp2g4 [2.929831123s] Feb 1 11:16:15.604: INFO: Created: latency-svc-4hj2l Feb 1 11:16:15.751: INFO: Got endpoints: latency-svc-4hj2l [3.078802344s] Feb 1 11:16:15.764: INFO: Created: latency-svc-kpdx5 Feb 1 
11:16:15.792: INFO: Got endpoints: latency-svc-kpdx5 [2.924557809s] Feb 1 11:16:15.921: INFO: Created: latency-svc-z48nj Feb 1 11:16:15.947: INFO: Got endpoints: latency-svc-z48nj [3.042208713s] Feb 1 11:16:16.015: INFO: Created: latency-svc-jn72k Feb 1 11:16:16.316: INFO: Got endpoints: latency-svc-jn72k [3.247952695s] Feb 1 11:16:16.394: INFO: Created: latency-svc-wmf58 Feb 1 11:16:16.662: INFO: Got endpoints: latency-svc-wmf58 [3.411988632s] Feb 1 11:16:16.685: INFO: Created: latency-svc-nbdn6 Feb 1 11:16:16.712: INFO: Got endpoints: latency-svc-nbdn6 [3.407018844s] Feb 1 11:16:17.035: INFO: Created: latency-svc-t745f Feb 1 11:16:17.045: INFO: Got endpoints: latency-svc-t745f [3.586949011s] Feb 1 11:16:17.357: INFO: Created: latency-svc-qcvwc Feb 1 11:16:17.390: INFO: Got endpoints: latency-svc-qcvwc [3.551117904s] Feb 1 11:16:17.451: INFO: Created: latency-svc-qzksm Feb 1 11:16:17.616: INFO: Got endpoints: latency-svc-qzksm [3.770783747s] Feb 1 11:16:17.690: INFO: Created: latency-svc-4f7zq Feb 1 11:16:18.120: INFO: Created: latency-svc-lxwpt Feb 1 11:16:18.171: INFO: Got endpoints: latency-svc-lxwpt [3.921695929s] Feb 1 11:16:18.577: INFO: Got endpoints: latency-svc-4f7zq [4.550857285s] Feb 1 11:16:18.860: INFO: Created: latency-svc-tdtzb Feb 1 11:16:18.874: INFO: Got endpoints: latency-svc-tdtzb [3.672567187s] Feb 1 11:16:19.111: INFO: Created: latency-svc-6l6wl Feb 1 11:16:19.183: INFO: Got endpoints: latency-svc-6l6wl [3.821289711s] Feb 1 11:16:19.384: INFO: Created: latency-svc-j4gw6 Feb 1 11:16:19.401: INFO: Got endpoints: latency-svc-j4gw6 [3.877311323s] Feb 1 11:16:19.642: INFO: Created: latency-svc-xh2gs Feb 1 11:16:19.653: INFO: Got endpoints: latency-svc-xh2gs [4.096713794s] Feb 1 11:16:19.702: INFO: Created: latency-svc-t5wnr Feb 1 11:16:19.811: INFO: Got endpoints: latency-svc-t5wnr [4.05905451s] Feb 1 11:16:19.824: INFO: Created: latency-svc-57n29 Feb 1 11:16:19.833: INFO: Got endpoints: latency-svc-57n29 [4.039921381s] Feb 1 11:16:19.889: INFO: Created: latency-svc-g6djx Feb 1 11:16:19.898: INFO: Got endpoints: latency-svc-g6djx [3.950622268s] Feb 1 11:16:20.045: INFO: Created: latency-svc-7gv6h Feb 1 11:16:20.048: INFO: Got endpoints: latency-svc-7gv6h [3.732373114s] Feb 1 11:16:20.146: INFO: Created: latency-svc-gb5dg Feb 1 11:16:20.298: INFO: Got endpoints: latency-svc-gb5dg [3.636070698s] Feb 1 11:16:20.318: INFO: Created: latency-svc-zb8wc Feb 1 11:16:20.346: INFO: Got endpoints: latency-svc-zb8wc [3.633635186s] Feb 1 11:16:20.565: INFO: Created: latency-svc-v77zh Feb 1 11:16:20.579: INFO: Got endpoints: latency-svc-v77zh [3.533618064s] Feb 1 11:16:20.667: INFO: Created: latency-svc-z5ckv Feb 1 11:16:20.667: INFO: Got endpoints: latency-svc-z5ckv [3.276829511s] Feb 1 11:16:20.832: INFO: Created: latency-svc-gxd2f Feb 1 11:16:20.832: INFO: Got endpoints: latency-svc-gxd2f [3.21540305s] Feb 1 11:16:20.980: INFO: Created: latency-svc-pngsv Feb 1 11:16:21.032: INFO: Got endpoints: latency-svc-pngsv [2.86076692s] Feb 1 11:16:21.039: INFO: Created: latency-svc-5gdkl Feb 1 11:16:21.047: INFO: Got endpoints: latency-svc-5gdkl [2.469638799s] Feb 1 11:16:21.189: INFO: Created: latency-svc-bfv6n Feb 1 11:16:21.228: INFO: Got endpoints: latency-svc-bfv6n [2.353501842s] Feb 1 11:16:21.276: INFO: Created: latency-svc-868rd Feb 1 11:16:21.397: INFO: Got endpoints: latency-svc-868rd [2.214497226s] Feb 1 11:16:21.403: INFO: Created: latency-svc-g5h9j Feb 1 11:16:21.574: INFO: Got endpoints: latency-svc-g5h9j [2.172658447s] Feb 1 11:16:21.585: INFO: Created: latency-svc-hvktg Feb 1 
11:16:21.610: INFO: Got endpoints: latency-svc-hvktg [1.956853798s] Feb 1 11:16:21.650: INFO: Created: latency-svc-6pmr9 Feb 1 11:16:21.754: INFO: Got endpoints: latency-svc-6pmr9 [1.943274809s] Feb 1 11:16:21.769: INFO: Created: latency-svc-qsjjm Feb 1 11:16:21.810: INFO: Got endpoints: latency-svc-qsjjm [1.977103364s] Feb 1 11:16:21.810: INFO: Latencies: [229.330932ms 265.191253ms 426.837169ms 465.723854ms 692.573903ms 893.234774ms 1.139774004s 1.169707703s 1.390320715s 1.590131883s 1.650506822s 1.788805243s 1.79328056s 1.843475601s 1.849022599s 1.933904696s 1.943274809s 1.956853798s 1.965567663s 1.966916927s 1.977103364s 1.977618405s 2.041820104s 2.047798634s 2.069675431s 2.074075754s 2.075381405s 2.081653485s 2.109340349s 2.135390651s 2.154908931s 2.157721406s 2.160413313s 2.166834787s 2.172658447s 2.174291271s 2.200946544s 2.208600142s 2.214497226s 2.221906863s 2.23054601s 2.23127371s 2.233221872s 2.234000669s 2.251374029s 2.251660382s 2.254044698s 2.25667206s 2.258689418s 2.262791283s 2.264023396s 2.269049711s 2.273786412s 2.281231091s 2.285193459s 2.292074973s 2.292119308s 2.29477261s 2.295009142s 2.298718428s 2.298977865s 2.312071638s 2.313868293s 2.317828726s 2.319156748s 2.321281073s 2.323108166s 2.324802148s 2.331100085s 2.336483134s 2.348416444s 2.353501842s 2.369785168s 2.372163265s 2.373617983s 2.374696834s 2.375312742s 2.377168474s 2.407836433s 2.41233279s 2.422524254s 2.422865604s 2.428053327s 2.431357253s 2.432979888s 2.434387755s 2.436092525s 2.436675043s 2.440215888s 2.442963564s 2.445077166s 2.453672653s 2.457448013s 2.459364489s 2.460567453s 2.466867987s 2.469638799s 2.474049982s 2.475722043s 2.475883903s 2.477374285s 2.480491782s 2.484633906s 2.496596501s 2.497951542s 2.498491582s 2.499046852s 2.502995273s 2.506425056s 2.509273803s 2.513533804s 2.519661648s 2.520724485s 2.525037935s 2.532604131s 2.535694511s 2.541601595s 2.541749686s 2.548475996s 2.552919461s 2.56025379s 2.561825191s 2.572488025s 2.579697115s 2.59242959s 2.596379263s 2.601830293s 2.603831722s 2.60451981s 2.607490305s 2.613034198s 2.613975479s 2.619402797s 2.626174317s 2.630734487s 2.634852332s 2.638140043s 2.688229622s 2.696407945s 2.712678044s 2.716224028s 2.727974817s 2.747991615s 2.753847867s 2.769110922s 2.770416536s 2.771682498s 2.782341855s 2.78557828s 2.790816864s 2.800543069s 2.832635875s 2.839590873s 2.844565895s 2.856183956s 2.86076692s 2.884023343s 2.9042173s 2.924557809s 2.929831123s 3.017561635s 3.042208713s 3.078802344s 3.086994877s 3.101487404s 3.120898406s 3.126554512s 3.138454215s 3.179320264s 3.182747486s 3.187096962s 3.19246516s 3.215371871s 3.21540305s 3.217646897s 3.227422056s 3.247952695s 3.24944826s 3.272501566s 3.276829511s 3.291782075s 3.298869199s 3.407018844s 3.411988632s 3.533618064s 3.551117904s 3.586949011s 3.633635186s 3.636070698s 3.672567187s 3.732373114s 3.770783747s 3.821289711s 3.877311323s 3.921695929s 3.950622268s 4.039921381s 4.05905451s 4.096713794s 4.550857285s] Feb 1 11:16:21.811: INFO: 50 %ile: 2.477374285s Feb 1 11:16:21.811: INFO: 90 %ile: 3.291782075s Feb 1 11:16:21.811: INFO: 99 %ile: 4.096713794s Feb 1 11:16:21.811: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:16:21.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-szz58" for this suite. 
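The percentile lines above summarize, for each of the 200 Services created, the delay between Service creation and matching Endpoints appearing. A rough sketch of that bookkeeping (sorting the samples and reporting order statistics); the framework's exact rounding and indexing may differ, and the sample values here are just a handful of the observed latencies:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of sorted samples,
// using a simple nearest-rank rule.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{
		229 * time.Millisecond, 1169 * time.Millisecond, 2477 * time.Millisecond,
		2613 * time.Millisecond, 3291 * time.Millisecond, 4096 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```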
Feb 1 11:17:33.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:17:34.202: INFO: namespace: e2e-tests-svc-latency-szz58, resource: bindings, ignored listing per whitelist Feb 1 11:17:34.263: INFO: namespace e2e-tests-svc-latency-szz58 deletion completed in 1m12.432140684s • [SLOW TEST:116.895 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:17:34.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-72126eaf-44e4-11ea-a88d-0242ac110005 STEP: Creating secret with name s-test-opt-upd-72126fb3-44e4-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-72126eaf-44e4-11ea-a88d-0242ac110005 STEP: Updating secret s-test-opt-upd-72126fb3-44e4-11ea-a88d-0242ac110005 STEP: Creating secret with name s-test-opt-create-72127030-44e4-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:18:58.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-t774m" for this suite. 
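The "optional updates" Secrets spec above relies on the `optional` field of the secret volume source: with it set, the pod can start even if the referenced Secret does not exist yet, and the kubelet adds or removes the projected files as Secrets are created, updated, or deleted. A short sketch of that field, with a made-up secret name:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "optional-secret",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create-demo", // may not exist at pod-creation time
				Optional:   &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```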
Feb 1 11:19:22.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:19:22.784: INFO: namespace: e2e-tests-secrets-t774m, resource: bindings, ignored listing per whitelist Feb 1 11:19:22.789: INFO: namespace e2e-tests-secrets-t774m deletion completed in 24.129938096s • [SLOW TEST:108.525 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:19:22.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5457n [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Feb 1 11:19:22.991: INFO: Found 0 stateful pods, waiting for 3 Feb 1 11:19:33.015: INFO: Found 2 stateful pods, waiting for 3 Feb 1 11:19:43.022: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:19:43.022: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:19:43.022: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 11:19:53.009: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:19:53.009: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:19:53.009: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 1 11:19:53.063: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 1 11:20:03.136: INFO: Updating stateful set ss2 Feb 1 11:20:03.171: INFO: Waiting for Pod e2e-tests-statefulset-5457n/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 1 11:20:15.905: INFO: Found 2 stateful pods, waiting for 3 Feb 1 11:20:25.949: INFO: Found 2 stateful pods, waiting for 3 Feb 1 11:20:35.923: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, 
currently Running - Ready=true Feb 1 11:20:35.923: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:20:35.923: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 11:20:45.924: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:20:45.924: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 11:20:45.924: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 1 11:20:45.984: INFO: Updating stateful set ss2 Feb 1 11:20:46.004: INFO: Waiting for Pod e2e-tests-statefulset-5457n/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 11:20:56.050: INFO: Waiting for Pod e2e-tests-statefulset-5457n/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 11:21:06.718: INFO: Updating stateful set ss2 Feb 1 11:21:06.731: INFO: Waiting for StatefulSet e2e-tests-statefulset-5457n/ss2 to complete update Feb 1 11:21:06.731: INFO: Waiting for Pod e2e-tests-statefulset-5457n/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 11:21:16.753: INFO: Waiting for StatefulSet e2e-tests-statefulset-5457n/ss2 to complete update Feb 1 11:21:16.753: INFO: Waiting for Pod e2e-tests-statefulset-5457n/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 11:21:26.815: INFO: Waiting for StatefulSet e2e-tests-statefulset-5457n/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 1 11:21:36.760: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5457n Feb 1 11:21:36.764: INFO: Scaling statefulset ss2 to 0 Feb 1 11:22:16.806: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 11:22:16.815: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:22:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5457n" for this suite. 
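
For reference, the canary and phased roll-out recorded above are driven by the StatefulSet's spec.updateStrategy.rollingUpdate.partition field. A rough kubectl equivalent follows; it assumes the template's container is named nginx (not shown in the log) and omits the generated namespace.

# Pin the partition so only ordinals >= 2 pick up a new template (the canary):
kubectl patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

# Change the template; only ss2-2 is rolled while the partition stays at 2:
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

# Phase the rest of the roll-out by lowering the partition, then wait for it:
kubectl patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
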
Feb 1 11:22:25.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:22:25.222: INFO: namespace: e2e-tests-statefulset-5457n, resource: bindings, ignored listing per whitelist Feb 1 11:22:25.257: INFO: namespace e2e-tests-statefulset-5457n deletion completed in 8.3454728s • [SLOW TEST:182.469 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:22:25.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6kswd STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 1 11:22:25.561: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 1 11:23:05.957: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6kswd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:23:05.957: INFO: >>> kubeConfig: /root/.kube/config I0201 11:23:06.038020 8 log.go:172] (0xc000eb7080) (0xc0020668c0) Create stream I0201 11:23:06.038185 8 log.go:172] (0xc000eb7080) (0xc0020668c0) Stream added, broadcasting: 1 I0201 11:23:06.044307 8 log.go:172] (0xc000eb7080) Reply frame received for 1 I0201 11:23:06.044398 8 log.go:172] (0xc000eb7080) (0xc0010c7d60) Create stream I0201 11:23:06.044418 8 log.go:172] (0xc000eb7080) (0xc0010c7d60) Stream added, broadcasting: 3 I0201 11:23:06.046192 8 log.go:172] (0xc000eb7080) Reply frame received for 3 I0201 11:23:06.046252 8 log.go:172] (0xc000eb7080) (0xc0006768c0) Create stream I0201 11:23:06.046275 8 log.go:172] (0xc000eb7080) (0xc0006768c0) Stream added, broadcasting: 5 I0201 11:23:06.047752 8 log.go:172] (0xc000eb7080) Reply frame received for 5 I0201 11:23:06.261824 8 log.go:172] (0xc000eb7080) Data frame received for 3 I0201 11:23:06.261893 8 log.go:172] (0xc0010c7d60) (3) Data frame handling I0201 11:23:06.261932 8 log.go:172] (0xc0010c7d60) (3) Data frame sent I0201 11:23:06.434394 8 log.go:172] (0xc000eb7080) Data frame received for 1 I0201 11:23:06.434507 8 log.go:172] (0xc0020668c0) (1) Data frame handling I0201 11:23:06.434571 8 log.go:172] (0xc0020668c0) (1) Data frame sent I0201 
11:23:06.434624 8 log.go:172] (0xc000eb7080) (0xc0020668c0) Stream removed, broadcasting: 1 I0201 11:23:06.434961 8 log.go:172] (0xc000eb7080) (0xc0010c7d60) Stream removed, broadcasting: 3 I0201 11:23:06.435165 8 log.go:172] (0xc000eb7080) (0xc0006768c0) Stream removed, broadcasting: 5 I0201 11:23:06.435250 8 log.go:172] (0xc000eb7080) (0xc0020668c0) Stream removed, broadcasting: 1 I0201 11:23:06.435269 8 log.go:172] (0xc000eb7080) (0xc0010c7d60) Stream removed, broadcasting: 3 I0201 11:23:06.435291 8 log.go:172] (0xc000eb7080) (0xc0006768c0) Stream removed, broadcasting: 5 Feb 1 11:23:06.435: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:23:06.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0201 11:23:06.436704 8 log.go:172] (0xc000eb7080) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-6kswd" for this suite. Feb 1 11:23:30.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:23:30.797: INFO: namespace: e2e-tests-pod-network-test-6kswd, resource: bindings, ignored listing per whitelist Feb 1 11:23:30.973: INFO: namespace e2e-tests-pod-network-test-6kswd deletion completed in 24.516323896s • [SLOW TEST:65.716 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:23:30.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-46a7aad1-44e5-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 11:23:31.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-xvl7z" to be "success or failure" Feb 1 11:23:31.209: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.662669ms Feb 1 11:23:33.257: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057653006s Feb 1 11:23:35.268: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.069529462s Feb 1 11:23:37.378: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179130342s Feb 1 11:23:39.406: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207020991s Feb 1 11:23:41.417: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218616963s STEP: Saw pod success Feb 1 11:23:41.418: INFO: Pod "pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:23:41.420: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 1 11:23:42.117: INFO: Waiting for pod pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:23:42.129: INFO: Pod pod-projected-secrets-46a8a0b0-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:23:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xvl7z" for this suite. Feb 1 11:23:48.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:23:48.319: INFO: namespace: e2e-tests-projected-xvl7z, resource: bindings, ignored listing per whitelist Feb 1 11:23:48.410: INFO: namespace e2e-tests-projected-xvl7z deletion completed in 6.265114353s • [SLOW TEST:17.437 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:23:48.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 1 11:23:48.652: INFO: Waiting up to 5m0s for pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-7px4b" to be "success or failure" Feb 1 11:23:48.666: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.35127ms Feb 1 11:23:50.689: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036748857s Feb 1 11:23:52.727: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.074983431s Feb 1 11:23:54.740: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08769019s Feb 1 11:23:56.768: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116138381s STEP: Saw pod success Feb 1 11:23:56.768: INFO: Pod "downward-api-510fda99-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:23:56.779: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-510fda99-44e5-11ea-a88d-0242ac110005 container dapi-container: STEP: delete the pod Feb 1 11:23:57.005: INFO: Waiting for pod downward-api-510fda99-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:23:57.014: INFO: Pod downward-api-510fda99-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:23:57.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7px4b" for this suite. Feb 1 11:24:03.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:24:03.240: INFO: namespace: e2e-tests-downward-api-7px4b, resource: bindings, ignored listing per whitelist Feb 1 11:24:03.251: INFO: namespace e2e-tests-downward-api-7px4b deletion completed in 6.228795006s • [SLOW TEST:14.840 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:24:03.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-ptws4/secret-test-59df4a8a-44e5-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 11:24:03.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-ptws4" to be "success or failure" Feb 1 11:24:03.477: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.779749ms Feb 1 11:24:05.498: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050800282s Feb 1 11:24:07.517: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069514783s Feb 1 11:24:09.743: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.296004374s Feb 1 11:24:11.759: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.311984458s Feb 1 11:24:13.812: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.364800685s STEP: Saw pod success Feb 1 11:24:13.812: INFO: Pod "pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:24:13.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005 container env-test: STEP: delete the pod Feb 1 11:24:14.038: INFO: Waiting for pod pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:24:14.063: INFO: Pod pod-configmaps-59e02373-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:24:14.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ptws4" for this suite. Feb 1 11:24:20.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:24:20.449: INFO: namespace: e2e-tests-secrets-ptws4, resource: bindings, ignored listing per whitelist Feb 1 11:24:20.482: INFO: namespace e2e-tests-secrets-ptws4 deletion completed in 6.371620693s • [SLOW TEST:17.230 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:24:20.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2pvrc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-2pvrc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-2pvrc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-2pvrc STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-2pvrc Feb 1 11:24:34.948: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2pvrc, name: ss-0, uid: 6bcd24b0-44e5-11ea-a994-fa163e34d433, status phase: Pending. 
Waiting for statefulset controller to delete. Feb 1 11:24:34.986: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2pvrc, name: ss-0, uid: 6bcd24b0-44e5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 1 11:24:35.073: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2pvrc, name: ss-0, uid: 6bcd24b0-44e5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 1 11:24:35.101: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-2pvrc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-2pvrc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-2pvrc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 1 11:24:48.215: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2pvrc Feb 1 11:24:48.223: INFO: Scaling statefulset ss to 0 Feb 1 11:25:08.331: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 11:25:08.346: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:25:08.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2pvrc" for this suite. Feb 1 11:25:16.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:25:16.839: INFO: namespace: e2e-tests-statefulset-2pvrc, resource: bindings, ignored listing per whitelist Feb 1 11:25:16.887: INFO: namespace e2e-tests-statefulset-2pvrc deletion completed in 8.486916668s • [SLOW TEST:56.405 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:25:16.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-85c46910-44e5-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 11:25:17.102: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-jbntx" to be "success or failure" Feb 1 11:25:17.212: INFO: Pod 
"pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 110.100578ms Feb 1 11:25:19.413: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311363369s Feb 1 11:25:21.422: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320616076s Feb 1 11:25:23.890: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788015347s Feb 1 11:25:25.979: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.876945067s Feb 1 11:25:27.992: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.890039725s Feb 1 11:25:30.007: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.905847292s STEP: Saw pod success Feb 1 11:25:30.008: INFO: Pod "pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:25:30.013: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 1 11:25:30.350: INFO: Waiting for pod pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:25:30.608: INFO: Pod pod-projected-configmaps-85c5aa1a-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:25:30.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jbntx" for this suite. 
Feb 1 11:25:38.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:25:38.736: INFO: namespace: e2e-tests-projected-jbntx, resource: bindings, ignored listing per whitelist Feb 1 11:25:38.910: INFO: namespace e2e-tests-projected-jbntx deletion completed in 8.292937455s • [SLOW TEST:22.022 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:25:38.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 1 11:25:59.520: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:25:59.609: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:01.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:01.619: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:03.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:03.629: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:05.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:05.625: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:07.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:07.632: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:09.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:09.629: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:11.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:11.632: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:13.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:13.638: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:15.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:15.623: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:17.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:17.627: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:19.609: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:19.614: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:21.610: INFO: Waiting 
for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:21.628: INFO: Pod pod-with-prestop-exec-hook still exists Feb 1 11:26:23.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 1 11:26:23.632: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:26:23.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zk7gn" for this suite. Feb 1 11:26:47.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:26:47.918: INFO: namespace: e2e-tests-container-lifecycle-hook-zk7gn, resource: bindings, ignored listing per whitelist Feb 1 11:26:47.926: INFO: namespace e2e-tests-container-lifecycle-hook-zk7gn deletion completed in 24.240472311s • [SLOW TEST:69.016 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:26:47.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 1 11:26:48.301: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:27:04.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lz4bs" for this suite. 
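
The init-container spec that this kind of test submits looks roughly like the following; the names, images and commands are placeholders, not the ones the framework generated.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo first init container"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo second init container"]
  containers:
  - name: run-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo main container"]
EOF

# Both init containers must terminate successfully, in order, before run-1 starts:
kubectl get pod init-demo \
  -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
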
Feb 1 11:27:10.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:27:10.591: INFO: namespace: e2e-tests-init-container-lz4bs, resource: bindings, ignored listing per whitelist Feb 1 11:27:10.672: INFO: namespace e2e-tests-init-container-lz4bs deletion completed in 6.289901946s • [SLOW TEST:22.744 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:27:10.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 1 11:27:10.865: INFO: Waiting up to 5m0s for pod "pod-c996d591-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-gczdt" to be "success or failure" Feb 1 11:27:10.878: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.131296ms Feb 1 11:27:12.889: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024202647s Feb 1 11:27:15.769: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.903831847s Feb 1 11:27:17.829: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.964419359s Feb 1 11:27:19.848: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.98358484s STEP: Saw pod success Feb 1 11:27:19.849: INFO: Pod "pod-c996d591-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:27:19.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c996d591-44e5-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:27:20.468: INFO: Waiting for pod pod-c996d591-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:27:20.480: INFO: Pod pod-c996d591-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:27:20.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gczdt" for this suite. 
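
An approximation of the "(non-root,0666,default)" case with stock images: a non-root container creates a 0666 file on a default-medium emptyDir and prints its mode. The conformance test drives this through its own mount-test image and flags, so the manifest below is only an illustration and the UID is arbitrary.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # non-root
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -ln /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                # default medium (node disk)
EOF

kubectl logs emptydir-mode-demo -c test-container
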
Feb 1 11:27:26.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:27:26.693: INFO: namespace: e2e-tests-emptydir-gczdt, resource: bindings, ignored listing per whitelist Feb 1 11:27:26.762: INFO: namespace e2e-tests-emptydir-gczdt deletion completed in 6.263499164s • [SLOW TEST:16.090 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:27:26.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0201 11:27:40.199445 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 1 11:27:40.199: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:27:40.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wjp9z" for this suite. 
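
What the garbage-collector spec above relies on is each dependent pod carrying two owner references and the first owner being deleted with foreground propagation. A rough equivalent outside the framework is sketched below; the RC names come from the log, but the namespace, UIDs and the proxy/curl plumbing are assumptions for illustration.

# Each dependent pod ends up with two owner references, roughly:
#   ownerReferences:
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-be-deleted
#     blockOwnerDeletion: true
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-stay
#
# Deleting the first owner with propagationPolicy=Foreground must not remove
# pods that still have the second, valid owner:
kubectl proxy --port=8001 &
curl -X DELETE \
  'http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc-to-be-deleted' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
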
Feb 1 11:28:03.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:28:04.150: INFO: namespace: e2e-tests-gc-wjp9z, resource: bindings, ignored listing per whitelist Feb 1 11:28:04.155: INFO: namespace e2e-tests-gc-wjp9z deletion completed in 23.949040996s • [SLOW TEST:37.393 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:28:04.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 11:28:04.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tqclb' Feb 1 11:28:06.634: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 11:28:06.634: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 1 11:28:06.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-tqclb' Feb 1 11:28:07.145: INFO: stderr: "" Feb 1 11:28:07.146: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:28:07.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tqclb" for this suite. 
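
Since the generator form used above is flagged as deprecated by the client itself, the non-generator equivalent (on newer kubectl releases that ship `kubectl create job`) would be:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job
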
Feb 1 11:28:29.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:28:29.591: INFO: namespace: e2e-tests-kubectl-tqclb, resource: bindings, ignored listing per whitelist Feb 1 11:28:29.600: INFO: namespace e2e-tests-kubectl-tqclb deletion completed in 22.427928614s • [SLOW TEST:25.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:28:29.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f8af1b6a-44e5-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 11:28:29.997: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-smcv7" to be "success or failure" Feb 1 11:28:30.068: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.86414ms Feb 1 11:28:32.122: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123998125s Feb 1 11:28:34.144: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146601254s Feb 1 11:28:36.225: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227721861s Feb 1 11:28:38.242: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244853434s Feb 1 11:28:40.264: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266663513s Feb 1 11:28:42.310: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.312857375s STEP: Saw pod success Feb 1 11:28:42.311: INFO: Pod "pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:28:42.318: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 1 11:28:42.498: INFO: Waiting for pod pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005 to disappear Feb 1 11:28:42.508: INFO: Pod pod-configmaps-f8c064e4-44e5-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:28:42.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-smcv7" for this suite. Feb 1 11:28:48.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:28:48.637: INFO: namespace: e2e-tests-configmap-smcv7, resource: bindings, ignored listing per whitelist Feb 1 11:28:48.736: INFO: namespace e2e-tests-configmap-smcv7 deletion completed in 6.220459602s • [SLOW TEST:19.135 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:28:48.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:29:00.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lx4pr" for this suite. 
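
A self-contained version of the adoption scenario above: a pre-existing pod labelled name=pod-adoption, then a ReplicationController whose selector matches it. The image choice is illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
EOF

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF

# The orphan pod is adopted rather than replaced; it now carries an ownerReference:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'
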
Feb 1 11:29:24.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:29:24.338: INFO: namespace: e2e-tests-replication-controller-lx4pr, resource: bindings, ignored listing per whitelist Feb 1 11:29:24.487: INFO: namespace e2e-tests-replication-controller-lx4pr deletion completed in 24.226134876s • [SLOW TEST:35.750 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:29:24.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 1 11:29:24.759: INFO: Waiting up to 5m0s for pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-9gbrp" to be "success or failure" Feb 1 11:29:24.839: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 80.197508ms Feb 1 11:29:26.852: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093311892s Feb 1 11:29:29.547: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788539274s Feb 1 11:29:31.576: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817140081s Feb 1 11:29:33.594: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.835694036s STEP: Saw pod success Feb 1 11:29:33.594: INFO: Pod "pod-1965acc3-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:29:34.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1965acc3-44e6-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:29:34.842: INFO: Waiting for pod pod-1965acc3-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:29:34.937: INFO: Pod pod-1965acc3-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:29:34.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9gbrp" for this suite. 
Feb 1 11:29:41.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:29:41.073: INFO: namespace: e2e-tests-emptydir-9gbrp, resource: bindings, ignored listing per whitelist Feb 1 11:29:41.130: INFO: namespace e2e-tests-emptydir-9gbrp deletion completed in 6.176323281s • [SLOW TEST:16.643 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:29:41.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-23508aa7-44e6-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 11:29:41.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-vcxpp" to be "success or failure" Feb 1 11:29:41.428: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.140798ms Feb 1 11:29:43.442: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025471214s Feb 1 11:29:45.471: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053942104s Feb 1 11:29:47.494: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077299902s Feb 1 11:29:49.535: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11786497s Feb 1 11:29:51.553: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.135682311s STEP: Saw pod success Feb 1 11:29:51.553: INFO: Pod "pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:29:51.561: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 1 11:29:51.677: INFO: Waiting for pod pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:29:51.698: INFO: Pod pod-projected-configmaps-23525cef-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:29:51.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vcxpp" for this suite. Feb 1 11:29:57.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:29:58.081: INFO: namespace: e2e-tests-projected-vcxpp, resource: bindings, ignored listing per whitelist Feb 1 11:29:58.119: INFO: namespace e2e-tests-projected-vcxpp deletion completed in 6.35114124s • [SLOW TEST:16.989 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:29:58.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:29:58.417: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 1 11:30:03.435: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 1 11:30:09.467: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 1 11:30:09.527: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-g8zbv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g8zbv/deployments/test-cleanup-deployment,UID:340f54a7-44e6-11ea-a994-fa163e34d433,ResourceVersion:20187128,Generation:1,CreationTimestamp:2020-02-01 11:30:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 1 11:30:09.580: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:30:09.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-g8zbv" for this suite. 
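
The dumped Deployment above has RevisionHistoryLimit:*0, which is what makes superseded ReplicaSets eligible for cleanup. A hand-rolled equivalent follows; the deployment name, the app=cleanup-demo label, and the container name derived by `kubectl create deployment` are assumptions.

kubectl create deployment cleanup-demo --image=docker.io/library/nginx:1.14-alpine
kubectl patch deployment cleanup-demo --type merge -p '{"spec":{"revisionHistoryLimit":0}}'

# Roll the template once; with a zero history limit the old ReplicaSet is deleted
# as soon as the new one is fully rolled out:
kubectl set image deployment/cleanup-demo nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status deployment/cleanup-demo
kubectl get rs -l app=cleanup-demo
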
Feb 1 11:30:17.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:30:17.990: INFO: namespace: e2e-tests-deployment-g8zbv, resource: bindings, ignored listing per whitelist Feb 1 11:30:18.028: INFO: namespace e2e-tests-deployment-g8zbv deletion completed in 8.417643868s • [SLOW TEST:19.908 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:30:18.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 1 11:30:29.616: INFO: Successfully updated pod "pod-update-activedeadlineseconds-394c6b92-44e6-11ea-a88d-0242ac110005" Feb 1 11:30:29.616: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-394c6b92-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-pods-d5n94" to be "terminated due to deadline exceeded" Feb 1 11:30:29.634: INFO: Pod "pod-update-activedeadlineseconds-394c6b92-44e6-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 18.350344ms Feb 1 11:30:31.778: INFO: Pod "pod-update-activedeadlineseconds-394c6b92-44e6-11ea-a88d-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.162057795s Feb 1 11:30:31.778: INFO: Pod "pod-update-activedeadlineseconds-394c6b92-44e6-11ea-a88d-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:30:31.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-d5n94" for this suite. 
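
The pod in the activeDeadlineSeconds test above is updated with a short spec.activeDeadlineSeconds and then ends up Failed with reason DeadlineExceeded once the deadline passes, as the log shows. A rough sketch of a pod carrying that field; the image, command and the 5-second value are placeholders, not taken from the suite:

// Sketch: once spec.activeDeadlineSeconds elapses, the kubelet terminates
// the pod and it is marked Failed with reason DeadlineExceeded.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			// The e2e test starts the pod without a deadline and patches a
			// short one in; declaring it up front has the same end result.
			ActiveDeadlineSeconds: int64Ptr(5),
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
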
Feb 1 11:30:38.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:30:38.207: INFO: namespace: e2e-tests-pods-d5n94, resource: bindings, ignored listing per whitelist Feb 1 11:30:38.253: INFO: namespace e2e-tests-pods-d5n94 deletion completed in 6.463270331s • [SLOW TEST:20.224 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:30:38.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 1 11:30:38.505: INFO: Waiting up to 5m0s for pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-bhmg6" to be "success or failure" Feb 1 11:30:38.614: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 108.226739ms Feb 1 11:30:40.626: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120604351s Feb 1 11:30:42.647: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141462027s Feb 1 11:30:44.660: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154782852s Feb 1 11:30:46.673: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168126048s Feb 1 11:30:48.687: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182124194s STEP: Saw pod success Feb 1 11:30:48.688: INFO: Pod "downward-api-45571cd6-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:30:48.691: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-45571cd6-44e6-11ea-a88d-0242ac110005 container dapi-container: STEP: delete the pod Feb 1 11:30:48.947: INFO: Waiting for pod downward-api-45571cd6-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:30:48.966: INFO: Pod downward-api-45571cd6-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:30:48.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bhmg6" for this suite. 
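
The downward-API test above injects the pod's name, namespace and IP through env-var fieldRef selectors. A minimal sketch of that wiring; the container name matches the log, everything else is a placeholder:

// Sketch: downward-API environment variables sourced from pod metadata/status.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fieldRef(path string) *corev1.EnvVarSource {
	return &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "POD_NAME", ValueFrom: fieldRef("metadata.name")},
					{Name: "POD_NAMESPACE", ValueFrom: fieldRef("metadata.namespace")},
					{Name: "POD_IP", ValueFrom: fieldRef("status.podIP")},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
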
Feb 1 11:30:55.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:30:55.332: INFO: namespace: e2e-tests-downward-api-bhmg6, resource: bindings, ignored listing per whitelist Feb 1 11:30:55.354: INFO: namespace e2e-tests-downward-api-bhmg6 deletion completed in 6.368667811s • [SLOW TEST:17.101 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:30:55.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:30:55.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-dhs9s" to be "success or failure" Feb 1 11:30:55.577: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.295783ms Feb 1 11:30:57.586: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021813331s Feb 1 11:30:59.607: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043212382s Feb 1 11:31:01.613: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049542373s Feb 1 11:31:03.670: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106281865s Feb 1 11:31:05.687: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.122849272s STEP: Saw pod success Feb 1 11:31:05.687: INFO: Pod "downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:31:05.694: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:31:06.284: INFO: Waiting for pod downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:31:06.306: INFO: Pod downwardapi-volume-4f834943-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:31:06.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dhs9s" for this suite. Feb 1 11:31:12.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:31:12.839: INFO: namespace: e2e-tests-projected-dhs9s, resource: bindings, ignored listing per whitelist Feb 1 11:31:12.933: INFO: namespace e2e-tests-projected-dhs9s deletion completed in 6.435540332s • [SLOW TEST:17.578 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:31:12.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 1 11:31:13.193: INFO: Waiting up to 5m0s for pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-containers-f9f8r" to be "success or failure" Feb 1 11:31:13.207: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.596472ms Feb 1 11:31:15.333: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139179429s Feb 1 11:31:17.343: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149427046s Feb 1 11:31:19.555: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361224489s Feb 1 11:31:21.569: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.37573817s Feb 1 11:31:23.584: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.390169329s STEP: Saw pod success Feb 1 11:31:23.584: INFO: Pod "client-containers-5a06081f-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:31:23.590: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5a06081f-44e6-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:31:24.201: INFO: Waiting for pod client-containers-5a06081f-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:31:24.483: INFO: Pod client-containers-5a06081f-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:31:24.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-f9f8r" for this suite. Feb 1 11:31:30.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:31:30.690: INFO: namespace: e2e-tests-containers-f9f8r, resource: bindings, ignored listing per whitelist Feb 1 11:31:30.848: INFO: namespace e2e-tests-containers-f9f8r deletion completed in 6.346424684s • [SLOW TEST:17.914 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:31:30.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:31:31.229: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"64c3cadf-44e6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000ec43f2), BlockOwnerDeletion:(*bool)(0xc000ec43f3)}} Feb 1 11:31:31.272: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"64bb565a-44e6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d12eaa), BlockOwnerDeletion:(*bool)(0xc001d12eab)}} Feb 1 11:31:31.346: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"64c220d5-44e6-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001514c02), BlockOwnerDeletion:(*bool)(0xc001514c03)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:31:36.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-smmdr" for this suite. 
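
The garbage-collector test above links pod1 -> pod3 -> pod2 -> pod1 through metadata.ownerReferences and verifies that the cycle does not block deletion. One owner reference of that shape, built with the same fields shown in the log (the UID value here is a placeholder):

// Sketch: a single OwnerReference like the ones printed by the GC test.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	ref := metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               "pod3",
		UID:                "00000000-0000-0000-0000-000000000000", // placeholder
		Controller:         boolPtr(true),
		BlockOwnerDeletion: boolPtr(true),
	}
	out, _ := json.MarshalIndent(ref, "", "  ")
	fmt.Println(string(out))
}
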
Feb 1 11:31:42.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:31:42.774: INFO: namespace: e2e-tests-gc-smmdr, resource: bindings, ignored listing per whitelist Feb 1 11:31:42.776: INFO: namespace e2e-tests-gc-smmdr deletion completed in 6.3464138s • [SLOW TEST:11.928 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:31:42.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 1 11:31:42.979: INFO: Waiting up to 5m0s for pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-var-expansion-jg895" to be "success or failure" Feb 1 11:31:43.010: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.270451ms Feb 1 11:31:45.181: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201900581s Feb 1 11:31:47.260: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280719002s Feb 1 11:31:49.482: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502780156s Feb 1 11:31:51.491: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512131132s Feb 1 11:31:53.857: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.878277452s STEP: Saw pod success Feb 1 11:31:53.857: INFO: Pod "var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:31:53.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005 container dapi-container: STEP: delete the pod Feb 1 11:31:54.155: INFO: Waiting for pod var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:31:54.180: INFO: Pod var-expansion-6bc4acf4-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:31:54.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-jg895" for this suite. 
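
The variable-expansion test above composes new env vars from existing ones using $(NAME) references, which the kubelet expands when the container starts. A small sketch of that pattern; variable names and values are placeholders:

// Sketch: an env var whose value is composed from earlier vars via $(NAME).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			// Expands to "foo-value;;bar-value" inside the container.
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
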
Feb 1 11:32:00.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:32:00.530: INFO: namespace: e2e-tests-var-expansion-jg895, resource: bindings, ignored listing per whitelist Feb 1 11:32:00.630: INFO: namespace e2e-tests-var-expansion-jg895 deletion completed in 6.419964511s • [SLOW TEST:17.853 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:32:00.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 1 11:32:21.048: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 11:32:21.120: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 11:32:23.120: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 11:32:23.151: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 11:32:25.120: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 11:32:25.138: INFO: Pod pod-with-poststart-http-hook still exists Feb 1 11:32:27.120: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 1 11:32:27.151: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:32:27.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-h4b4d" for this suite. 
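
The lifecycle-hook test above attaches a postStart HTTP handler, which the kubelet calls immediately after the container starts. A sketch of such a hook, using the v1.13-era Go type names this suite is built against (newer releases rename Handler to LifecycleHandler); the names, path, port and image are placeholders:

// Sketch: a container with a postStart HTTP GET lifecycle hook.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "poststart-hook",
		Image: "nginx:1.14-alpine",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart",
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
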
Feb 1 11:32:51.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:32:51.287: INFO: namespace: e2e-tests-container-lifecycle-hook-h4b4d, resource: bindings, ignored listing per whitelist Feb 1 11:32:51.391: INFO: namespace e2e-tests-container-lifecycle-hook-h4b4d deletion completed in 24.225276296s • [SLOW TEST:50.760 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:32:51.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 1 11:33:02.124: INFO: Successfully updated pod "labelsupdate94a4aecc-44e6-11ea-a88d-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:33:04.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t247l" for this suite. 
Feb 1 11:33:28.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:33:28.680: INFO: namespace: e2e-tests-projected-t247l, resource: bindings, ignored listing per whitelist Feb 1 11:33:28.689: INFO: namespace e2e-tests-projected-t247l deletion completed in 24.213223485s • [SLOW TEST:37.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:33:28.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:33:28.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-hqwmn" to be "success or failure" Feb 1 11:33:28.884: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.440196ms Feb 1 11:33:30.902: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043124387s Feb 1 11:33:32.929: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069943088s Feb 1 11:33:35.165: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305980757s Feb 1 11:33:37.178: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319100328s Feb 1 11:33:39.189: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329902839s Feb 1 11:33:41.203: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.343940056s STEP: Saw pod success Feb 1 11:33:41.203: INFO: Pod "downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:33:41.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:33:41.313: INFO: Waiting for pod downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005 to disappear Feb 1 11:33:41.355: INFO: Pod downwardapi-volume-aade7cd6-44e6-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:33:41.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hqwmn" for this suite. Feb 1 11:33:47.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:33:47.577: INFO: namespace: e2e-tests-projected-hqwmn, resource: bindings, ignored listing per whitelist Feb 1 11:33:47.644: INFO: namespace e2e-tests-projected-hqwmn deletion completed in 6.270128405s • [SLOW TEST:18.955 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:33:47.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:33:58.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qbhwd" for this suite. 
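
The "EmptyDir wrapper volumes should not conflict" test above mounts a secret volume and a configMap volume side by side in one pod; at the time, both volume types were implemented on top of a wrapped emptyDir, which is where conflicts could historically arise. A sketch of that pod shape with placeholder object and mount names:

// Sketch: a secret volume and a configMap volume used together in one pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-and-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret"},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "demo-configmap"},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
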
Feb 1 11:34:04.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:34:04.189: INFO: namespace: e2e-tests-emptydir-wrapper-qbhwd, resource: bindings, ignored listing per whitelist Feb 1 11:34:04.296: INFO: namespace e2e-tests-emptydir-wrapper-qbhwd deletion completed in 6.197266908s • [SLOW TEST:16.651 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:34:04.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:34:11.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-k2b2v" for this suite. Feb 1 11:34:20.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:34:20.142: INFO: namespace: e2e-tests-namespaces-k2b2v, resource: bindings, ignored listing per whitelist Feb 1 11:34:20.196: INFO: namespace e2e-tests-namespaces-k2b2v deletion completed in 8.23543049s STEP: Destroying namespace "e2e-tests-nsdeletetest-2j2mh" for this suite. Feb 1 11:34:20.206: INFO: Namespace e2e-tests-nsdeletetest-2j2mh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-28tb5" for this suite. 
Feb 1 11:34:26.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:34:26.524: INFO: namespace: e2e-tests-nsdeletetest-28tb5, resource: bindings, ignored listing per whitelist Feb 1 11:34:26.548: INFO: namespace e2e-tests-nsdeletetest-28tb5 deletion completed in 6.342326275s • [SLOW TEST:22.252 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:34:26.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:34:36.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5qwnt" for this suite. 
Feb 1 11:35:20.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:35:21.176: INFO: namespace: e2e-tests-kubelet-test-5qwnt, resource: bindings, ignored listing per whitelist Feb 1 11:35:21.240: INFO: namespace e2e-tests-kubelet-test-5qwnt deletion completed in 44.305333054s • [SLOW TEST:54.691 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:35:21.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:36:21.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5spxw" for this suite. 
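
The probe test above uses a readiness probe that always fails, so the container keeps running but the pod never reports Ready and, unlike with a failing liveness probe, is never restarted. A sketch of such a probe; image, command and timings are placeholders:

// Sketch: an exec readiness probe that always fails.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// The probe handler is an embedded struct, so assigning the action
	// directly keeps this snippet independent of its exact type name.
	probe.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}}

	c := corev1.Container{
		Name:           "probe-test",
		Image:          "busybox",
		Command:        []string{"sleep", "3600"},
		ReadinessProbe: probe,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
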
Feb 1 11:36:45.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:36:45.851: INFO: namespace: e2e-tests-container-probe-5spxw, resource: bindings, ignored listing per whitelist Feb 1 11:36:45.969: INFO: namespace e2e-tests-container-probe-5spxw deletion completed in 24.247728605s • [SLOW TEST:84.730 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:36:45.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 1 11:36:46.337: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20187991,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 1 11:36:46.338: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20187992,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 1 11:36:46.338: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20187993,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 1 11:36:56.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20188007,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 1 11:36:56.547: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20188008,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 1 11:36:56.548: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-s62xh,SelfLink:/api/v1/namespaces/e2e-tests-watch-s62xh/configmaps/e2e-watch-test-label-changed,UID:2091beaa-44e7-11ea-a994-fa163e34d433,ResourceVersion:20188009,Generation:0,CreationTimestamp:2020-02-01 11:36:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:36:56.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-s62xh" for this suite. 
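
The watch test above opens a label-selector watch on ConfigMaps and receives ADDED/MODIFIED/DELETED events as the label is changed, restored and finally removed along with the object. A sketch of that pattern using the context-free Watch signature of the client-go release contemporary with this suite (newer releases also take a context); the kubeconfig path matches the log, the namespace is a placeholder:

// Sketch: watching ConfigMaps that carry a specific label value.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	w, err := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event mirrors the "Got : ADDED/MODIFIED/DELETED" lines in the log.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
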
Feb 1 11:37:02.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:37:02.941: INFO: namespace: e2e-tests-watch-s62xh, resource: bindings, ignored listing per whitelist Feb 1 11:37:03.053: INFO: namespace e2e-tests-watch-s62xh deletion completed in 6.47429928s • [SLOW TEST:17.083 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:37:03.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 1 11:37:13.881: INFO: Successfully updated pod "labelsupdate2aaaffbf-44e7-11ea-a88d-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:37:15.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6trg9" for this suite. 
Feb 1 11:37:40.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:37:40.146: INFO: namespace: e2e-tests-downward-api-6trg9, resource: bindings, ignored listing per whitelist Feb 1 11:37:40.202: INFO: namespace e2e-tests-downward-api-6trg9 deletion completed in 24.209062611s • [SLOW TEST:37.149 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:37:40.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 1 11:37:40.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6vskq' Feb 1 11:37:40.596: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 1 11:37:40.596: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 1 11:37:42.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6vskq' Feb 1 11:37:43.019: INFO: stderr: "" Feb 1 11:37:43.019: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:37:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6vskq" for this suite. 
Feb 1 11:37:49.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:37:49.270: INFO: namespace: e2e-tests-kubectl-6vskq, resource: bindings, ignored listing per whitelist Feb 1 11:37:49.360: INFO: namespace e2e-tests-kubectl-6vskq deletion completed in 6.331845296s • [SLOW TEST:9.158 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:37:49.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 1 11:38:07.712: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:07.752: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:09.753: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:09.778: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:11.753: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:11.773: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:13.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:13.769: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:15.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:15.770: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:17.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:17.768: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:19.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:19.769: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:21.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:21.765: INFO: Pod pod-with-prestop-http-hook still exists Feb 1 11:38:23.752: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 1 11:38:23.773: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:38:23.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-container-lifecycle-hook-ppbfl" for this suite. Feb 1 11:38:46.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:38:46.184: INFO: namespace: e2e-tests-container-lifecycle-hook-ppbfl, resource: bindings, ignored listing per whitelist Feb 1 11:38:46.292: INFO: namespace e2e-tests-container-lifecycle-hook-ppbfl deletion completed in 22.455330848s • [SLOW TEST:56.932 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:38:46.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-6834577e-44e7-11ea-a88d-0242ac110005 STEP: Creating secret with name s-test-opt-upd-68345855-44e7-11ea-a88d-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6834577e-44e7-11ea-a88d-0242ac110005 STEP: Updating secret s-test-opt-upd-68345855-44e7-11ea-a88d-0242ac110005 STEP: Creating secret with name s-test-opt-create-683458ae-44e7-11ea-a88d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:40:08.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vxk6f" for this suite. 
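
The projected-secret test above mounts three secret sources (s-test-opt-del, s-test-opt-upd, s-test-opt-create) marked optional, so the pod starts even while a source is missing and the mounted view follows later creates, updates and deletes. A sketch of that volume; only the secret names are taken from the log, the rest is illustrative:

// Sketch: a projected volume with several optional secret sources.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func secretSource(name string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             boolPtr(true), // a missing secret is tolerated
		},
	}
}

func main() {
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					secretSource("s-test-opt-del"),
					secretSource("s-test-opt-upd"),
					secretSource("s-test-opt-create"),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
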
Feb 1 11:40:32.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:40:32.949: INFO: namespace: e2e-tests-projected-vxk6f, resource: bindings, ignored listing per whitelist Feb 1 11:40:32.996: INFO: namespace e2e-tests-projected-vxk6f deletion completed in 24.257270159s • [SLOW TEST:106.703 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:40:32.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a7d33f31-44e7-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume secrets Feb 1 11:40:33.329: INFO: Waiting up to 5m0s for pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-ktwt6" to be "success or failure" Feb 1 11:40:33.391: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.532749ms Feb 1 11:40:35.426: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096754334s Feb 1 11:40:37.459: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13039589s Feb 1 11:40:39.682: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353310166s Feb 1 11:40:41.707: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.37801056s Feb 1 11:40:43.718: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.389000859s STEP: Saw pod success Feb 1 11:40:43.718: INFO: Pod "pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:40:43.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 1 11:40:44.282: INFO: Waiting for pod pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005 to disappear Feb 1 11:40:44.590: INFO: Pod pod-secrets-a7d3dd5b-44e7-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:40:44.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ktwt6" for this suite. Feb 1 11:40:50.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:40:50.955: INFO: namespace: e2e-tests-secrets-ktwt6, resource: bindings, ignored listing per whitelist Feb 1 11:40:50.971: INFO: namespace e2e-tests-secrets-ktwt6 deletion completed in 6.363253436s • [SLOW TEST:17.974 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:40:50.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:40:51.200: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:41:01.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bf4z8" for this suite. 
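The Secrets volume test above consumes a secret as a non-root user with an explicit defaultMode on the volume and an fsGroup on the pod. A rough, hedged equivalent of the generated pod spec, with hypothetical names, image, and values, is:

    kubectl create secret generic secret-test-demo --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      securityContext:
        runAsUser: 1000     # run the container as a non-root UID
        fsGroup: 1000       # volume files get this group ownership
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-demo
          defaultMode: 0440   # files readable by owner and group only
    EOF

With fsGroup set, the mounted files are group-owned by that GID, so a non-root user in the group can still read them despite the restrictive mode.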
Feb 1 11:41:55.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:41:55.657: INFO: namespace: e2e-tests-pods-bf4z8, resource: bindings, ignored listing per whitelist Feb 1 11:41:55.764: INFO: namespace e2e-tests-pods-bf4z8 deletion completed in 54.372382424s • [SLOW TEST:64.794 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:41:55.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 11:42:24.071: INFO: Container started at 2020-02-01 11:42:03 +0000 UTC, pod became ready at 2020-02-01 11:42:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:42:24.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jsdlt" for this suite. 
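The readiness-probe test above asserts that a container which starts promptly is still reported NotReady until the probe's initial delay has elapsed (the log shows a 20-second gap between container start and the Ready condition) and that the container never restarts. A hedged sketch of a pod with that shape, assuming an arbitrary image and delay:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-webserver
    spec:
      containers:
      - name: test-webserver
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 20   # pod stays NotReady for at least this long
          periodSeconds: 5
    EOF
    kubectl get pod test-webserver -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'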
Feb 1 11:42:48.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:42:48.187: INFO: namespace: e2e-tests-container-probe-jsdlt, resource: bindings, ignored listing per whitelist Feb 1 11:42:48.307: INFO: namespace e2e-tests-container-probe-jsdlt deletion completed in 24.221170212s • [SLOW TEST:52.542 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:42:48.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 1 11:42:58.754: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f87c09b1-44e7-11ea-a88d-0242ac110005,GenerateName:,Namespace:e2e-tests-events-6mcmn,SelfLink:/api/v1/namespaces/e2e-tests-events-6mcmn/pods/send-events-f87c09b1-44e7-11ea-a88d-0242ac110005,UID:f87ec4a6-44e7-11ea-a994-fa163e34d433,ResourceVersion:20188672,Generation:0,CreationTimestamp:2020-02-01 11:42:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 523424293,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-86s2g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-86s2g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-86s2g true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018e20a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0018e20c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 11:42:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 11:42:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 11:42:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 11:42:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-01 11:42:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-01 11:42:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://4517274b89df98e01f214851877304ed67fd12f2fbe30019fdcd7ab1ccde875a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 1 11:43:00.799: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 1 11:43:02.818: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:43:02.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-6mcmn" for this suite. Feb 1 11:43:43.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:43:43.143: INFO: namespace: e2e-tests-events-6mcmn, resource: bindings, ignored listing per whitelist Feb 1 11:43:43.180: INFO: namespace e2e-tests-events-6mcmn deletion completed in 40.279160569s • [SLOW TEST:54.872 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:43:43.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 1 11:43:43.421: INFO: Waiting up to 5m0s for pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-klkdd" to be "success or failure" Feb 1 11:43:43.442: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.797829ms Feb 1 11:43:45.455: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034363989s Feb 1 11:43:47.471: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049435549s Feb 1 11:43:49.481: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059941698s Feb 1 11:43:51.500: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079220489s STEP: Saw pod success Feb 1 11:43:51.500: INFO: Pod "downward-api-192f8d35-44e8-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:43:51.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-192f8d35-44e8-11ea-a88d-0242ac110005 container dapi-container: STEP: delete the pod Feb 1 11:43:51.619: INFO: Waiting for pod downward-api-192f8d35-44e8-11ea-a88d-0242ac110005 to disappear Feb 1 11:43:51.627: INFO: Pod downward-api-192f8d35-44e8-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:43:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-klkdd" for this suite. Feb 1 11:43:57.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:43:57.885: INFO: namespace: e2e-tests-downward-api-klkdd, resource: bindings, ignored listing per whitelist Feb 1 11:43:58.089: INFO: namespace e2e-tests-downward-api-klkdd deletion completed in 6.428427045s • [SLOW TEST:14.909 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:43:58.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 1 11:43:58.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:00.168: INFO: stderr: "" Feb 1 11:44:00.168: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in 
name=update-demo pods to come up. Feb 1 11:44:00.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:00.557: INFO: stderr: "" Feb 1 11:44:00.557: INFO: stdout: "update-demo-nautilus-kz454 update-demo-nautilus-vxnrj " Feb 1 11:44:00.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz454 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:00.908: INFO: stderr: "" Feb 1 11:44:00.908: INFO: stdout: "" Feb 1 11:44:00.908: INFO: update-demo-nautilus-kz454 is created but not running Feb 1 11:44:05.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:06.103: INFO: stderr: "" Feb 1 11:44:06.103: INFO: stdout: "update-demo-nautilus-kz454 update-demo-nautilus-vxnrj " Feb 1 11:44:06.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz454 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:07.516: INFO: stderr: "" Feb 1 11:44:07.516: INFO: stdout: "" Feb 1 11:44:07.516: INFO: update-demo-nautilus-kz454 is created but not running Feb 1 11:44:12.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:12.663: INFO: stderr: "" Feb 1 11:44:12.663: INFO: stdout: "update-demo-nautilus-kz454 update-demo-nautilus-vxnrj " Feb 1 11:44:12.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz454 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:12.798: INFO: stderr: "" Feb 1 11:44:12.798: INFO: stdout: "true" Feb 1 11:44:12.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kz454 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:12.953: INFO: stderr: "" Feb 1 11:44:12.953: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 11:44:12.953: INFO: validating pod update-demo-nautilus-kz454 Feb 1 11:44:12.974: INFO: got data: { "image": "nautilus.jpg" } Feb 1 11:44:12.974: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 1 11:44:12.974: INFO: update-demo-nautilus-kz454 is verified up and running Feb 1 11:44:12.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxnrj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:13.082: INFO: stderr: "" Feb 1 11:44:13.082: INFO: stdout: "true" Feb 1 11:44:13.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxnrj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:13.208: INFO: stderr: "" Feb 1 11:44:13.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 1 11:44:13.208: INFO: validating pod update-demo-nautilus-vxnrj Feb 1 11:44:13.217: INFO: got data: { "image": "nautilus.jpg" } Feb 1 11:44:13.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 1 11:44:13.217: INFO: update-demo-nautilus-vxnrj is verified up and running STEP: rolling-update to new replication controller Feb 1 11:44:13.220: INFO: scanned /root for discovery docs: Feb 1 11:44:13.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:46.328: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 1 11:44:46.328: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 1 11:44:46.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:46.550: INFO: stderr: "" Feb 1 11:44:46.550: INFO: stdout: "update-demo-kitten-7c6v8 update-demo-kitten-t59mz " Feb 1 11:44:46.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7c6v8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:46.684: INFO: stderr: "" Feb 1 11:44:46.684: INFO: stdout: "true" Feb 1 11:44:46.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7c6v8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:46.797: INFO: stderr: "" Feb 1 11:44:46.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 1 11:44:46.797: INFO: validating pod update-demo-kitten-7c6v8 Feb 1 11:44:46.816: INFO: got data: { "image": "kitten.jpg" } Feb 1 11:44:46.816: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Feb 1 11:44:46.816: INFO: update-demo-kitten-7c6v8 is verified up and running Feb 1 11:44:46.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t59mz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:46.945: INFO: stderr: "" Feb 1 11:44:46.945: INFO: stdout: "true" Feb 1 11:44:46.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t59mz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggkrh' Feb 1 11:44:47.086: INFO: stderr: "" Feb 1 11:44:47.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 1 11:44:47.086: INFO: validating pod update-demo-kitten-t59mz Feb 1 11:44:47.106: INFO: got data: { "image": "kitten.jpg" } Feb 1 11:44:47.106: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 1 11:44:47.106: INFO: update-demo-kitten-t59mz is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:44:47.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ggkrh" for this suite. Feb 1 11:45:13.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:45:13.314: INFO: namespace: e2e-tests-kubectl-ggkrh, resource: bindings, ignored listing per whitelist Feb 1 11:45:13.357: INFO: namespace e2e-tests-kubectl-ggkrh deletion completed in 26.247350897s • [SLOW TEST:75.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:45:13.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 1 11:45:38.028: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
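The KubeletManagedEtcHosts test that starts here creates one pod with hostNetwork=false, whose /etc/hosts the kubelet manages, and one with hostNetwork=true, whose /etc/hosts comes straight from the node, then compares the file contents from inside each container. A minimal sketch of the host-network pod, reusing the pod and container names from the log but otherwise simplified:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-host-network-pod
    spec:
      hostNetwork: true          # share the node's network namespace; /etc/hosts is the host's file
      containers:
      - name: busybox-1
        image: busybox
        command: ["sleep", "3600"]
      - name: busybox-2
        image: busybox
        command: ["sleep", "3600"]
    EOF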
Feb 1 11:45:38.028: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:38.121808 8 log.go:172] (0xc000eb7080) (0xc0007399a0) Create stream I0201 11:45:38.121876 8 log.go:172] (0xc000eb7080) (0xc0007399a0) Stream added, broadcasting: 1 I0201 11:45:38.129329 8 log.go:172] (0xc000eb7080) Reply frame received for 1 I0201 11:45:38.129376 8 log.go:172] (0xc000eb7080) (0xc000978e60) Create stream I0201 11:45:38.129390 8 log.go:172] (0xc000eb7080) (0xc000978e60) Stream added, broadcasting: 3 I0201 11:45:38.131467 8 log.go:172] (0xc000eb7080) Reply frame received for 3 I0201 11:45:38.131509 8 log.go:172] (0xc000eb7080) (0xc0020670e0) Create stream I0201 11:45:38.131522 8 log.go:172] (0xc000eb7080) (0xc0020670e0) Stream added, broadcasting: 5 I0201 11:45:38.133156 8 log.go:172] (0xc000eb7080) Reply frame received for 5 I0201 11:45:38.338123 8 log.go:172] (0xc000eb7080) Data frame received for 3 I0201 11:45:38.338186 8 log.go:172] (0xc000978e60) (3) Data frame handling I0201 11:45:38.338220 8 log.go:172] (0xc000978e60) (3) Data frame sent I0201 11:45:38.556048 8 log.go:172] (0xc000eb7080) Data frame received for 1 I0201 11:45:38.556260 8 log.go:172] (0xc000eb7080) (0xc000978e60) Stream removed, broadcasting: 3 I0201 11:45:38.556374 8 log.go:172] (0xc0007399a0) (1) Data frame handling I0201 11:45:38.556486 8 log.go:172] (0xc0007399a0) (1) Data frame sent I0201 11:45:38.556611 8 log.go:172] (0xc000eb7080) (0xc0020670e0) Stream removed, broadcasting: 5 I0201 11:45:38.556700 8 log.go:172] (0xc000eb7080) (0xc0007399a0) Stream removed, broadcasting: 1 I0201 11:45:38.556750 8 log.go:172] (0xc000eb7080) Go away received I0201 11:45:38.557149 8 log.go:172] (0xc000eb7080) (0xc0007399a0) Stream removed, broadcasting: 1 I0201 11:45:38.557189 8 log.go:172] (0xc000eb7080) (0xc000978e60) Stream removed, broadcasting: 3 I0201 11:45:38.557216 8 log.go:172] (0xc000eb7080) (0xc0020670e0) Stream removed, broadcasting: 5 Feb 1 11:45:38.557: INFO: Exec stderr: "" Feb 1 11:45:38.557: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:38.557: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:38.674141 8 log.go:172] (0xc000eb7550) (0xc000739b80) Create stream I0201 11:45:38.674256 8 log.go:172] (0xc000eb7550) (0xc000739b80) Stream added, broadcasting: 1 I0201 11:45:38.682687 8 log.go:172] (0xc000eb7550) Reply frame received for 1 I0201 11:45:38.682761 8 log.go:172] (0xc000eb7550) (0xc000d08c80) Create stream I0201 11:45:38.682782 8 log.go:172] (0xc000eb7550) (0xc000d08c80) Stream added, broadcasting: 3 I0201 11:45:38.684125 8 log.go:172] (0xc000eb7550) Reply frame received for 3 I0201 11:45:38.684252 8 log.go:172] (0xc000eb7550) (0xc000d08dc0) Create stream I0201 11:45:38.684276 8 log.go:172] (0xc000eb7550) (0xc000d08dc0) Stream added, broadcasting: 5 I0201 11:45:38.685493 8 log.go:172] (0xc000eb7550) Reply frame received for 5 I0201 11:45:38.823445 8 log.go:172] (0xc000eb7550) Data frame received for 3 I0201 11:45:38.823619 8 log.go:172] (0xc000d08c80) (3) Data frame handling I0201 11:45:38.823683 8 log.go:172] (0xc000d08c80) (3) Data frame sent I0201 11:45:39.002665 8 log.go:172] (0xc000eb7550) Data frame received for 1 I0201 11:45:39.002964 8 log.go:172] (0xc000eb7550) (0xc000d08c80) Stream removed, broadcasting: 3 I0201 11:45:39.003082 8 log.go:172] (0xc000739b80) (1) Data frame handling I0201 11:45:39.003166 8 log.go:172] (0xc000739b80) (1) 
Data frame sent I0201 11:45:39.003218 8 log.go:172] (0xc000eb7550) (0xc000d08dc0) Stream removed, broadcasting: 5 I0201 11:45:39.003325 8 log.go:172] (0xc000eb7550) (0xc000739b80) Stream removed, broadcasting: 1 I0201 11:45:39.003361 8 log.go:172] (0xc000eb7550) Go away received I0201 11:45:39.003493 8 log.go:172] (0xc000eb7550) (0xc000739b80) Stream removed, broadcasting: 1 I0201 11:45:39.003513 8 log.go:172] (0xc000eb7550) (0xc000d08c80) Stream removed, broadcasting: 3 I0201 11:45:39.003528 8 log.go:172] (0xc000eb7550) (0xc000d08dc0) Stream removed, broadcasting: 5 Feb 1 11:45:39.003: INFO: Exec stderr: "" Feb 1 11:45:39.003: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:39.003: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:39.097571 8 log.go:172] (0xc0010da2c0) (0xc002046dc0) Create stream I0201 11:45:39.097625 8 log.go:172] (0xc0010da2c0) (0xc002046dc0) Stream added, broadcasting: 1 I0201 11:45:39.101364 8 log.go:172] (0xc0010da2c0) Reply frame received for 1 I0201 11:45:39.101393 8 log.go:172] (0xc0010da2c0) (0xc002089360) Create stream I0201 11:45:39.101401 8 log.go:172] (0xc0010da2c0) (0xc002089360) Stream added, broadcasting: 3 I0201 11:45:39.102448 8 log.go:172] (0xc0010da2c0) Reply frame received for 3 I0201 11:45:39.102469 8 log.go:172] (0xc0010da2c0) (0xc000d08e60) Create stream I0201 11:45:39.102477 8 log.go:172] (0xc0010da2c0) (0xc000d08e60) Stream added, broadcasting: 5 I0201 11:45:39.107000 8 log.go:172] (0xc0010da2c0) Reply frame received for 5 I0201 11:45:39.224743 8 log.go:172] (0xc0010da2c0) Data frame received for 3 I0201 11:45:39.224780 8 log.go:172] (0xc002089360) (3) Data frame handling I0201 11:45:39.224806 8 log.go:172] (0xc002089360) (3) Data frame sent I0201 11:45:39.384278 8 log.go:172] (0xc0010da2c0) Data frame received for 1 I0201 11:45:39.384360 8 log.go:172] (0xc0010da2c0) (0xc002089360) Stream removed, broadcasting: 3 I0201 11:45:39.384478 8 log.go:172] (0xc002046dc0) (1) Data frame handling I0201 11:45:39.384498 8 log.go:172] (0xc0010da2c0) (0xc000d08e60) Stream removed, broadcasting: 5 I0201 11:45:39.384529 8 log.go:172] (0xc002046dc0) (1) Data frame sent I0201 11:45:39.384542 8 log.go:172] (0xc0010da2c0) (0xc002046dc0) Stream removed, broadcasting: 1 I0201 11:45:39.384559 8 log.go:172] (0xc0010da2c0) Go away received I0201 11:45:39.384669 8 log.go:172] (0xc0010da2c0) (0xc002046dc0) Stream removed, broadcasting: 1 I0201 11:45:39.384683 8 log.go:172] (0xc0010da2c0) (0xc002089360) Stream removed, broadcasting: 3 I0201 11:45:39.384692 8 log.go:172] (0xc0010da2c0) (0xc000d08e60) Stream removed, broadcasting: 5 Feb 1 11:45:39.384: INFO: Exec stderr: "" Feb 1 11:45:39.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:39.384: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:39.488130 8 log.go:172] (0xc0018402c0) (0xc000b8b400) Create stream I0201 11:45:39.488163 8 log.go:172] (0xc0018402c0) (0xc000b8b400) Stream added, broadcasting: 1 I0201 11:45:39.492887 8 log.go:172] (0xc0018402c0) Reply frame received for 1 I0201 11:45:39.492915 8 log.go:172] (0xc0018402c0) (0xc002089400) Create stream I0201 11:45:39.492923 8 log.go:172] (0xc0018402c0) (0xc002089400) Stream added, broadcasting: 3 I0201 
11:45:39.494095 8 log.go:172] (0xc0018402c0) Reply frame received for 3 I0201 11:45:39.494116 8 log.go:172] (0xc0018402c0) (0xc002046f00) Create stream I0201 11:45:39.494123 8 log.go:172] (0xc0018402c0) (0xc002046f00) Stream added, broadcasting: 5 I0201 11:45:39.495454 8 log.go:172] (0xc0018402c0) Reply frame received for 5 I0201 11:45:39.611973 8 log.go:172] (0xc0018402c0) Data frame received for 3 I0201 11:45:39.611993 8 log.go:172] (0xc002089400) (3) Data frame handling I0201 11:45:39.612003 8 log.go:172] (0xc002089400) (3) Data frame sent I0201 11:45:39.747174 8 log.go:172] (0xc0018402c0) Data frame received for 1 I0201 11:45:39.747273 8 log.go:172] (0xc0018402c0) (0xc002089400) Stream removed, broadcasting: 3 I0201 11:45:39.747328 8 log.go:172] (0xc000b8b400) (1) Data frame handling I0201 11:45:39.747353 8 log.go:172] (0xc000b8b400) (1) Data frame sent I0201 11:45:39.747418 8 log.go:172] (0xc0018402c0) (0xc002046f00) Stream removed, broadcasting: 5 I0201 11:45:39.747483 8 log.go:172] (0xc0018402c0) (0xc000b8b400) Stream removed, broadcasting: 1 I0201 11:45:39.747526 8 log.go:172] (0xc0018402c0) Go away received I0201 11:45:39.747636 8 log.go:172] (0xc0018402c0) (0xc000b8b400) Stream removed, broadcasting: 1 I0201 11:45:39.747653 8 log.go:172] (0xc0018402c0) (0xc002089400) Stream removed, broadcasting: 3 I0201 11:45:39.747670 8 log.go:172] (0xc0018402c0) (0xc002046f00) Stream removed, broadcasting: 5 Feb 1 11:45:39.747: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 1 11:45:39.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:39.747: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:39.825015 8 log.go:172] (0xc0015d82c0) (0xc002089680) Create stream I0201 11:45:39.825076 8 log.go:172] (0xc0015d82c0) (0xc002089680) Stream added, broadcasting: 1 I0201 11:45:39.832444 8 log.go:172] (0xc0015d82c0) Reply frame received for 1 I0201 11:45:39.832502 8 log.go:172] (0xc0015d82c0) (0xc000739cc0) Create stream I0201 11:45:39.832510 8 log.go:172] (0xc0015d82c0) (0xc000739cc0) Stream added, broadcasting: 3 I0201 11:45:39.834077 8 log.go:172] (0xc0015d82c0) Reply frame received for 3 I0201 11:45:39.834117 8 log.go:172] (0xc0015d82c0) (0xc000739d60) Create stream I0201 11:45:39.834132 8 log.go:172] (0xc0015d82c0) (0xc000739d60) Stream added, broadcasting: 5 I0201 11:45:39.835624 8 log.go:172] (0xc0015d82c0) Reply frame received for 5 I0201 11:45:40.005670 8 log.go:172] (0xc0015d82c0) Data frame received for 3 I0201 11:45:40.005784 8 log.go:172] (0xc000739cc0) (3) Data frame handling I0201 11:45:40.005865 8 log.go:172] (0xc000739cc0) (3) Data frame sent I0201 11:45:40.154842 8 log.go:172] (0xc0015d82c0) (0xc000739cc0) Stream removed, broadcasting: 3 I0201 11:45:40.155005 8 log.go:172] (0xc0015d82c0) Data frame received for 1 I0201 11:45:40.155030 8 log.go:172] (0xc002089680) (1) Data frame handling I0201 11:45:40.155048 8 log.go:172] (0xc002089680) (1) Data frame sent I0201 11:45:40.155097 8 log.go:172] (0xc0015d82c0) (0xc002089680) Stream removed, broadcasting: 1 I0201 11:45:40.155195 8 log.go:172] (0xc0015d82c0) (0xc000739d60) Stream removed, broadcasting: 5 I0201 11:45:40.155264 8 log.go:172] (0xc0015d82c0) (0xc002089680) Stream removed, broadcasting: 1 I0201 11:45:40.155277 8 log.go:172] (0xc0015d82c0) (0xc000739cc0) Stream removed, 
broadcasting: 3 I0201 11:45:40.155286 8 log.go:172] (0xc0015d82c0) (0xc000739d60) Stream removed, broadcasting: 5 I0201 11:45:40.155594 8 log.go:172] (0xc0015d82c0) Go away received Feb 1 11:45:40.155: INFO: Exec stderr: "" Feb 1 11:45:40.155: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:40.155: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:40.230140 8 log.go:172] (0xc0015d8790) (0xc002089a40) Create stream I0201 11:45:40.230197 8 log.go:172] (0xc0015d8790) (0xc002089a40) Stream added, broadcasting: 1 I0201 11:45:40.237111 8 log.go:172] (0xc0015d8790) Reply frame received for 1 I0201 11:45:40.237169 8 log.go:172] (0xc0015d8790) (0xc002046fa0) Create stream I0201 11:45:40.237180 8 log.go:172] (0xc0015d8790) (0xc002046fa0) Stream added, broadcasting: 3 I0201 11:45:40.238974 8 log.go:172] (0xc0015d8790) Reply frame received for 3 I0201 11:45:40.239115 8 log.go:172] (0xc0015d8790) (0xc000739e00) Create stream I0201 11:45:40.239197 8 log.go:172] (0xc0015d8790) (0xc000739e00) Stream added, broadcasting: 5 I0201 11:45:40.244755 8 log.go:172] (0xc0015d8790) Reply frame received for 5 I0201 11:45:40.396245 8 log.go:172] (0xc0015d8790) Data frame received for 3 I0201 11:45:40.396347 8 log.go:172] (0xc002046fa0) (3) Data frame handling I0201 11:45:40.396375 8 log.go:172] (0xc002046fa0) (3) Data frame sent I0201 11:45:40.590860 8 log.go:172] (0xc0015d8790) (0xc002046fa0) Stream removed, broadcasting: 3 I0201 11:45:40.591067 8 log.go:172] (0xc0015d8790) Data frame received for 1 I0201 11:45:40.591219 8 log.go:172] (0xc0015d8790) (0xc000739e00) Stream removed, broadcasting: 5 I0201 11:45:40.591298 8 log.go:172] (0xc002089a40) (1) Data frame handling I0201 11:45:40.591323 8 log.go:172] (0xc002089a40) (1) Data frame sent I0201 11:45:40.591337 8 log.go:172] (0xc0015d8790) (0xc002089a40) Stream removed, broadcasting: 1 I0201 11:45:40.591353 8 log.go:172] (0xc0015d8790) Go away received I0201 11:45:40.591499 8 log.go:172] (0xc0015d8790) (0xc002089a40) Stream removed, broadcasting: 1 I0201 11:45:40.591509 8 log.go:172] (0xc0015d8790) (0xc002046fa0) Stream removed, broadcasting: 3 I0201 11:45:40.591518 8 log.go:172] (0xc0015d8790) (0xc000739e00) Stream removed, broadcasting: 5 Feb 1 11:45:40.591: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 1 11:45:40.591: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:40.591: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:40.763377 8 log.go:172] (0xc0015d8c60) (0xc002089cc0) Create stream I0201 11:45:40.763472 8 log.go:172] (0xc0015d8c60) (0xc002089cc0) Stream added, broadcasting: 1 I0201 11:45:40.803452 8 log.go:172] (0xc0015d8c60) Reply frame received for 1 I0201 11:45:40.803573 8 log.go:172] (0xc0015d8c60) (0xc0020470e0) Create stream I0201 11:45:40.803600 8 log.go:172] (0xc0015d8c60) (0xc0020470e0) Stream added, broadcasting: 3 I0201 11:45:40.805604 8 log.go:172] (0xc0015d8c60) Reply frame received for 3 I0201 11:45:40.805629 8 log.go:172] (0xc0015d8c60) (0xc000d08fa0) Create stream I0201 11:45:40.805636 8 log.go:172] (0xc0015d8c60) (0xc000d08fa0) Stream added, broadcasting: 5 I0201 11:45:40.809039 8 log.go:172] 
(0xc0015d8c60) Reply frame received for 5 I0201 11:45:40.938446 8 log.go:172] (0xc0015d8c60) Data frame received for 3 I0201 11:45:40.938491 8 log.go:172] (0xc0020470e0) (3) Data frame handling I0201 11:45:40.938569 8 log.go:172] (0xc0020470e0) (3) Data frame sent I0201 11:45:41.052054 8 log.go:172] (0xc0015d8c60) (0xc0020470e0) Stream removed, broadcasting: 3 I0201 11:45:41.052217 8 log.go:172] (0xc0015d8c60) Data frame received for 1 I0201 11:45:41.052250 8 log.go:172] (0xc002089cc0) (1) Data frame handling I0201 11:45:41.052276 8 log.go:172] (0xc002089cc0) (1) Data frame sent I0201 11:45:41.052304 8 log.go:172] (0xc0015d8c60) (0xc002089cc0) Stream removed, broadcasting: 1 I0201 11:45:41.052425 8 log.go:172] (0xc0015d8c60) (0xc000d08fa0) Stream removed, broadcasting: 5 I0201 11:45:41.052594 8 log.go:172] (0xc0015d8c60) Go away received I0201 11:45:41.052671 8 log.go:172] (0xc0015d8c60) (0xc002089cc0) Stream removed, broadcasting: 1 I0201 11:45:41.052686 8 log.go:172] (0xc0015d8c60) (0xc0020470e0) Stream removed, broadcasting: 3 I0201 11:45:41.052699 8 log.go:172] (0xc0015d8c60) (0xc000d08fa0) Stream removed, broadcasting: 5 Feb 1 11:45:41.052: INFO: Exec stderr: "" Feb 1 11:45:41.052: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:41.052: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:41.127956 8 log.go:172] (0xc000eb7a20) (0xc002710000) Create stream I0201 11:45:41.128053 8 log.go:172] (0xc000eb7a20) (0xc002710000) Stream added, broadcasting: 1 I0201 11:45:41.135909 8 log.go:172] (0xc000eb7a20) Reply frame received for 1 I0201 11:45:41.135938 8 log.go:172] (0xc000eb7a20) (0xc0020472c0) Create stream I0201 11:45:41.135947 8 log.go:172] (0xc000eb7a20) (0xc0020472c0) Stream added, broadcasting: 3 I0201 11:45:41.136793 8 log.go:172] (0xc000eb7a20) Reply frame received for 3 I0201 11:45:41.136811 8 log.go:172] (0xc000eb7a20) (0xc000b8b540) Create stream I0201 11:45:41.136817 8 log.go:172] (0xc000eb7a20) (0xc000b8b540) Stream added, broadcasting: 5 I0201 11:45:41.137675 8 log.go:172] (0xc000eb7a20) Reply frame received for 5 I0201 11:45:41.241503 8 log.go:172] (0xc000eb7a20) Data frame received for 3 I0201 11:45:41.241531 8 log.go:172] (0xc0020472c0) (3) Data frame handling I0201 11:45:41.241548 8 log.go:172] (0xc0020472c0) (3) Data frame sent I0201 11:45:41.377038 8 log.go:172] (0xc000eb7a20) Data frame received for 1 I0201 11:45:41.377077 8 log.go:172] (0xc000eb7a20) (0xc0020472c0) Stream removed, broadcasting: 3 I0201 11:45:41.377143 8 log.go:172] (0xc002710000) (1) Data frame handling I0201 11:45:41.377164 8 log.go:172] (0xc002710000) (1) Data frame sent I0201 11:45:41.377188 8 log.go:172] (0xc000eb7a20) (0xc000b8b540) Stream removed, broadcasting: 5 I0201 11:45:41.377224 8 log.go:172] (0xc000eb7a20) (0xc002710000) Stream removed, broadcasting: 1 I0201 11:45:41.377238 8 log.go:172] (0xc000eb7a20) Go away received I0201 11:45:41.377602 8 log.go:172] (0xc000eb7a20) (0xc002710000) Stream removed, broadcasting: 1 I0201 11:45:41.377611 8 log.go:172] (0xc000eb7a20) (0xc0020472c0) Stream removed, broadcasting: 3 I0201 11:45:41.377617 8 log.go:172] (0xc000eb7a20) (0xc000b8b540) Stream removed, broadcasting: 5 Feb 1 11:45:41.377: INFO: Exec stderr: "" Feb 1 11:45:41.377: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-host-network-pod 
ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:41.377: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:41.437939 8 log.go:172] (0xc001840790) (0xc000b8bb80) Create stream I0201 11:45:41.437991 8 log.go:172] (0xc001840790) (0xc000b8bb80) Stream added, broadcasting: 1 I0201 11:45:41.444678 8 log.go:172] (0xc001840790) Reply frame received for 1 I0201 11:45:41.444698 8 log.go:172] (0xc001840790) (0xc000b8bcc0) Create stream I0201 11:45:41.444712 8 log.go:172] (0xc001840790) (0xc000b8bcc0) Stream added, broadcasting: 3 I0201 11:45:41.446875 8 log.go:172] (0xc001840790) Reply frame received for 3 I0201 11:45:41.446894 8 log.go:172] (0xc001840790) (0xc002089d60) Create stream I0201 11:45:41.446903 8 log.go:172] (0xc001840790) (0xc002089d60) Stream added, broadcasting: 5 I0201 11:45:41.448122 8 log.go:172] (0xc001840790) Reply frame received for 5 I0201 11:45:41.543757 8 log.go:172] (0xc001840790) Data frame received for 3 I0201 11:45:41.543816 8 log.go:172] (0xc000b8bcc0) (3) Data frame handling I0201 11:45:41.543845 8 log.go:172] (0xc000b8bcc0) (3) Data frame sent I0201 11:45:41.641855 8 log.go:172] (0xc001840790) (0xc000b8bcc0) Stream removed, broadcasting: 3 I0201 11:45:41.641971 8 log.go:172] (0xc001840790) Data frame received for 1 I0201 11:45:41.642009 8 log.go:172] (0xc001840790) (0xc002089d60) Stream removed, broadcasting: 5 I0201 11:45:41.642057 8 log.go:172] (0xc000b8bb80) (1) Data frame handling I0201 11:45:41.642115 8 log.go:172] (0xc000b8bb80) (1) Data frame sent I0201 11:45:41.642130 8 log.go:172] (0xc001840790) (0xc000b8bb80) Stream removed, broadcasting: 1 I0201 11:45:41.642143 8 log.go:172] (0xc001840790) Go away received I0201 11:45:41.642355 8 log.go:172] (0xc001840790) (0xc000b8bb80) Stream removed, broadcasting: 1 I0201 11:45:41.642382 8 log.go:172] (0xc001840790) (0xc000b8bcc0) Stream removed, broadcasting: 3 I0201 11:45:41.642398 8 log.go:172] (0xc001840790) (0xc002089d60) Stream removed, broadcasting: 5 Feb 1 11:45:41.642: INFO: Exec stderr: "" Feb 1 11:45:41.642: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-k8jkh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 1 11:45:41.642: INFO: >>> kubeConfig: /root/.kube/config I0201 11:45:41.717677 8 log.go:172] (0xc000eb7ef0) (0xc002710280) Create stream I0201 11:45:41.717741 8 log.go:172] (0xc000eb7ef0) (0xc002710280) Stream added, broadcasting: 1 I0201 11:45:41.722115 8 log.go:172] (0xc000eb7ef0) Reply frame received for 1 I0201 11:45:41.722156 8 log.go:172] (0xc000eb7ef0) (0xc002089e00) Create stream I0201 11:45:41.722168 8 log.go:172] (0xc000eb7ef0) (0xc002089e00) Stream added, broadcasting: 3 I0201 11:45:41.723188 8 log.go:172] (0xc000eb7ef0) Reply frame received for 3 I0201 11:45:41.723213 8 log.go:172] (0xc000eb7ef0) (0xc002710320) Create stream I0201 11:45:41.723223 8 log.go:172] (0xc000eb7ef0) (0xc002710320) Stream added, broadcasting: 5 I0201 11:45:41.724218 8 log.go:172] (0xc000eb7ef0) Reply frame received for 5 I0201 11:45:41.838266 8 log.go:172] (0xc000eb7ef0) Data frame received for 3 I0201 11:45:41.838307 8 log.go:172] (0xc002089e00) (3) Data frame handling I0201 11:45:41.838329 8 log.go:172] (0xc002089e00) (3) Data frame sent I0201 11:45:41.944993 8 log.go:172] (0xc000eb7ef0) (0xc002089e00) Stream removed, broadcasting: 3 I0201 11:45:41.945080 8 log.go:172] (0xc000eb7ef0) Data frame received for 1 I0201 11:45:41.945104 
8 log.go:172] (0xc002710280) (1) Data frame handling I0201 11:45:41.945117 8 log.go:172] (0xc002710280) (1) Data frame sent I0201 11:45:41.945128 8 log.go:172] (0xc000eb7ef0) (0xc002710280) Stream removed, broadcasting: 1 I0201 11:45:41.945152 8 log.go:172] (0xc000eb7ef0) (0xc002710320) Stream removed, broadcasting: 5 I0201 11:45:41.945247 8 log.go:172] (0xc000eb7ef0) Go away received I0201 11:45:41.945295 8 log.go:172] (0xc000eb7ef0) (0xc002710280) Stream removed, broadcasting: 1 I0201 11:45:41.945310 8 log.go:172] (0xc000eb7ef0) (0xc002089e00) Stream removed, broadcasting: 3 I0201 11:45:41.945321 8 log.go:172] (0xc000eb7ef0) (0xc002710320) Stream removed, broadcasting: 5 Feb 1 11:45:41.945: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:45:41.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-k8jkh" for this suite. Feb 1 11:46:38.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:46:38.152: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-k8jkh, resource: bindings, ignored listing per whitelist Feb 1 11:46:38.171: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-k8jkh deletion completed in 56.212784312s • [SLOW TEST:84.813 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:46:38.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 1 11:46:38.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 1 11:46:38.647: INFO: stderr: "" Feb 1 11:46:38.647: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:46:38.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2g2hv" for this suite. 
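The ExecWithOptions calls in the KubeletManagedEtcHosts test above are the programmatic equivalent of running cat inside a container through the API server's exec subresource; the verbose "Create stream / Stream added, broadcasting / Stream removed" lines are the framework's debug trace of the streaming channels that carry the command's stdout and stderr being set up and torn down for each call. From the command line the same check can be approximated with kubectl exec (container names as in the log; the header comment is an assumption about what the kubelet typically writes):

    # Kubelet-managed copy (hostNetwork=false pod); usually starts with a
    # "# Kubernetes-managed hosts file." comment written by the kubelet.
    kubectl exec test-pod -c busybox-1 -- cat /etc/hosts

    # Host's own copy (hostNetwork=true pod); not touched by the kubelet.
    kubectl exec test-host-network-pod -c busybox-1 -- cat /etc/hosts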
Feb 1 11:46:44.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:46:44.850: INFO: namespace: e2e-tests-kubectl-2g2hv, resource: bindings, ignored listing per whitelist Feb 1 11:46:45.033: INFO: namespace e2e-tests-kubectl-2g2hv deletion completed in 6.378543605s • [SLOW TEST:6.862 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:46:45.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 1 11:46:45.249: INFO: Waiting up to 5m0s for pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005" in namespace "e2e-tests-containers-swgb7" to be "success or failure" Feb 1 11:46:45.261: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.488742ms Feb 1 11:46:47.281: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031071578s Feb 1 11:46:49.303: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053589116s Feb 1 11:46:51.319: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069934214s Feb 1 11:46:53.332: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082144651s Feb 1 11:46:55.343: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.093266574s STEP: Saw pod success Feb 1 11:46:55.343: INFO: Pod "client-containers-8593320b-44e8-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:46:55.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8593320b-44e8-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:46:56.106: INFO: Waiting for pod client-containers-8593320b-44e8-11ea-a88d-0242ac110005 to disappear Feb 1 11:46:56.129: INFO: Pod client-containers-8593320b-44e8-11ea-a88d-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:46:56.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-swgb7" for this suite. Feb 1 11:47:02.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:47:02.461: INFO: namespace: e2e-tests-containers-swgb7, resource: bindings, ignored listing per whitelist Feb 1 11:47:02.644: INFO: namespace e2e-tests-containers-swgb7 deletion completed in 6.505342266s • [SLOW TEST:17.610 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:47:02.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:47:02.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-hj544" to be "success or failure" Feb 1 11:47:03.022: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 113.96612ms Feb 1 11:47:05.111: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203656526s Feb 1 11:47:07.278: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369914729s Feb 1 11:47:09.873: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.96566968s Feb 1 11:47:11.892: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.983804377s Feb 1 11:47:13.908: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.000673663s STEP: Saw pod success Feb 1 11:47:13.908: INFO: Pod "downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:47:13.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:47:14.069: INFO: Waiting for pod downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005 to disappear Feb 1 11:47:14.180: INFO: Pod downwardapi-volume-901637c0-44e8-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:47:14.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hj544" for this suite. Feb 1 11:47:20.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:47:20.587: INFO: namespace: e2e-tests-downward-api-hj544, resource: bindings, ignored listing per whitelist Feb 1 11:47:21.575: INFO: namespace e2e-tests-downward-api-hj544 deletion completed in 7.349942517s • [SLOW TEST:18.930 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:47:21.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-j7qm6 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-j7qm6 to expose endpoints map[] Feb 1 11:47:21.873: INFO: Get endpoints failed (9.09599ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 1 11:47:22.889: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-j7qm6 exposes endpoints map[] (1.025077834s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-j7qm6 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-j7qm6 to expose endpoints map[pod1:[100]] Feb 1 11:47:27.020: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.106031815s elapsed, will retry) Feb 1 11:47:31.222: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-j7qm6 exposes endpoints 
map[pod1:[100]] (8.308808658s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-j7qm6 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-j7qm6 to expose endpoints map[pod1:[100] pod2:[101]] Feb 1 11:47:35.632: INFO: Unexpected endpoints: found map[9c06d42c-44e8-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.397819355s elapsed, will retry) Feb 1 11:47:40.192: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-j7qm6 exposes endpoints map[pod1:[100] pod2:[101]] (8.957869438s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-j7qm6 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-j7qm6 to expose endpoints map[pod2:[101]] Feb 1 11:47:41.408: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-j7qm6 exposes endpoints map[pod2:[101]] (1.184329442s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-j7qm6 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-j7qm6 to expose endpoints map[] Feb 1 11:47:42.721: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-j7qm6 exposes endpoints map[] (1.294391944s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:47:43.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-j7qm6" for this suite. Feb 1 11:48:08.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:48:08.164: INFO: namespace: e2e-tests-services-j7qm6, resource: bindings, ignored listing per whitelist Feb 1 11:48:08.227: INFO: namespace e2e-tests-services-j7qm6 deletion completed in 24.243549025s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:46.652 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:48:08.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 1 11:48:08.447: INFO: Waiting up to 5m0s for pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-nt6qf" to be "success or failure" Feb 1 11:48:08.487: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
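The Services block above ("should serve multiport endpoints from pods") drives a two-port service and waits for the endpoints object to track pod1 and pod2 under target ports 100 and 101 as the pods are created and deleted. A minimal sketch of a service shaped like that, assuming an illustrative selector and port names that are not taken from the test itself:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Two service ports mapping to the container ports (100 and 101) that the
    // endpoints in the log above report for pod1 and pod2.
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "multiport"}, // illustrative selector
            Ports: []corev1.ServicePort{
                {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }
    fmt.Printf("%s exposes %d ports\n", svc.Name, len(svc.Spec.Ports))
}

The endpoints controller only lists a pod under a port once that pod is running and ready, which is why the validation above retries until both pods appear and then watches the map shrink again as each pod is deleted.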
Elapsed: 39.926757ms Feb 1 11:48:10.703: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255599575s Feb 1 11:48:12.727: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280080756s Feb 1 11:48:14.744: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296843937s Feb 1 11:48:16.755: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30770119s Feb 1 11:48:18.769: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321560169s STEP: Saw pod success Feb 1 11:48:18.769: INFO: Pod "pod-b72ae320-44e8-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:48:18.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b72ae320-44e8-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 11:48:18.843: INFO: Waiting for pod pod-b72ae320-44e8-11ea-a88d-0242ac110005 to disappear Feb 1 11:48:18.927: INFO: Pod pod-b72ae320-44e8-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:48:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nt6qf" for this suite. Feb 1 11:48:26.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:48:26.558: INFO: namespace: e2e-tests-emptydir-nt6qf, resource: bindings, ignored listing per whitelist Feb 1 11:48:26.659: INFO: namespace e2e-tests-emptydir-nt6qf deletion completed in 7.711704597s • [SLOW TEST:18.432 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:48:26.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-6tx5x Feb 1 11:48:37.083: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-6tx5x STEP: checking the pod's current state and verifying that restartCount is present Feb 1 11:48:37.090: INFO: Initial restart count of pod liveness-http is 0 Feb 1 11:48:57.964: INFO: Restart count of pod 
e2e-tests-container-probe-6tx5x/liveness-http is now 1 (20.873466377s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 11:48:58.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6tx5x" for this suite.
Feb 1 11:49:04.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 11:49:04.358: INFO: namespace: e2e-tests-container-probe-6tx5x, resource: bindings, ignored listing per whitelist
Feb 1 11:49:04.382: INFO: namespace e2e-tests-container-probe-6tx5x deletion completed in 6.260996657s
• [SLOW TEST:37.722 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 1 11:49:04.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-qb245
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-qb245
STEP: Deleting pre-stop pod
Feb 1 11:49:30.003: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 11:49:30.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-qb245" for this suite.
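The PreStop block above verifies that a pod's preStop lifecycle hook fires when the pod is deleted: the server pod's recorded state shows "prestop": 1 received from the tester before the server itself is torn down. A minimal sketch of a pod carrying such a hook, assuming a k8s.io/api release from the v1.13 era shown in this log (newer releases renamed the handler type to LifecycleHandler) and using a placeholder HTTP target rather than the framework's actual tester wiring:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "tester",
                Image: "busybox", // placeholder image
                Lifecycle: &corev1.Lifecycle{
                    // The kubelet runs this hook on deletion, before SIGTERM is sent.
                    // corev1.Handler matches the v1.13-era API; newer k8s.io/api
                    // versions call this type LifecycleHandler.
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/write",             // placeholder path, not the e2e endpoint
                            Port: intstr.FromInt(8080), // placeholder port
                        },
                    },
                },
            }},
        },
    }
    fmt.Println("preStop hook target:", pod.Spec.Containers[0].Lifecycle.PreStop.HTTPGet.Path)
}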
Feb 1 11:50:16.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:50:16.339: INFO: namespace: e2e-tests-prestop-qb245, resource: bindings, ignored listing per whitelist Feb 1 11:50:16.359: INFO: namespace e2e-tests-prestop-qb245 deletion completed in 46.263177424s • [SLOW TEST:71.977 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:50:16.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 11:50:16.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-qfzjl" to be "success or failure" Feb 1 11:50:16.776: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.166677ms Feb 1 11:50:18.841: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096196522s Feb 1 11:50:20.851: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106391218s Feb 1 11:50:22.915: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170298318s Feb 1 11:50:24.967: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221923495s Feb 1 11:50:27.010: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
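The Downward API volume test running above ("should provide podname only") projects pod metadata into a file that the client container reads back, and the earlier cpu-request variant does the same through a resourceFieldRef instead of a fieldRef. A minimal sketch of such a pod, assuming an illustrative busybox image and mount path rather than the framework's actual fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            // FieldRef projects pod metadata; the cpu-request variant of this
                            // test family uses ResourceFieldRef with "requests.cpu" instead.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // placeholder image
                Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
        },
    }
    fmt.Println("projected file:", pod.Spec.Volumes[0].DownwardAPI.Items[0].Path)
}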
Elapsed: 10.26535191s STEP: Saw pod success Feb 1 11:50:27.010: INFO: Pod "downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 11:50:27.020: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 11:50:27.172: INFO: Waiting for pod downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005 to disappear Feb 1 11:50:27.182: INFO: Pod downwardapi-volume-039b938c-44e9-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:50:27.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qfzjl" for this suite. Feb 1 11:50:33.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:50:33.312: INFO: namespace: e2e-tests-downward-api-qfzjl, resource: bindings, ignored listing per whitelist Feb 1 11:50:33.431: INFO: namespace e2e-tests-downward-api-qfzjl deletion completed in 6.238233788s • [SLOW TEST:17.072 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:50:33.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-p94d4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p94d4 to expose endpoints map[] Feb 1 11:50:33.813: INFO: Get endpoints failed (88.354975ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 1 11:50:34.828: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p94d4 exposes endpoints map[] (1.103564497s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-p94d4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p94d4 to expose endpoints map[pod1:[80]] Feb 1 11:50:39.431: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.580382415s elapsed, will retry) Feb 1 11:50:45.460: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p94d4 exposes endpoints map[pod1:[80]] (10.608800859s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-p94d4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p94d4 to expose endpoints 
map[pod1:[80] pod2:[80]] Feb 1 11:50:50.282: INFO: Unexpected endpoints: found map[0e6e6731-44e9-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.748869686s elapsed, will retry) Feb 1 11:50:54.394: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p94d4 exposes endpoints map[pod1:[80] pod2:[80]] (8.861196033s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-p94d4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p94d4 to expose endpoints map[pod2:[80]] Feb 1 11:50:55.611: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p94d4 exposes endpoints map[pod2:[80]] (1.208237784s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-p94d4 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p94d4 to expose endpoints map[] Feb 1 11:50:56.782: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p94d4 exposes endpoints map[] (1.161312858s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:50:57.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-p94d4" for this suite. Feb 1 11:51:06.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:51:06.203: INFO: namespace: e2e-tests-services-p94d4, resource: bindings, ignored listing per whitelist Feb 1 11:51:06.263: INFO: namespace e2e-tests-services-p94d4 deletion completed in 8.241643677s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.832 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:51:06.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-mdxks Feb 1 11:51:16.599: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-mdxks STEP: checking the pod's current state and verifying that restartCount is present Feb 1 11:51:16.650: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:55:18.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mdxks" for this suite. Feb 1 11:55:26.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:55:26.585: INFO: namespace: e2e-tests-container-probe-mdxks, resource: bindings, ignored listing per whitelist Feb 1 11:55:26.747: INFO: namespace e2e-tests-container-probe-mdxks deletion completed in 8.391955112s • [SLOW TEST:260.484 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:55:26.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rckxk Feb 1 11:55:36.982: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rckxk STEP: checking the pod's current state and verifying that restartCount is present Feb 1 11:55:36.986: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 11:59:37.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rckxk" for this suite. 
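The two Probing container specs above are the counterpart of the earlier liveness-http case: probes that keep succeeding (the exec "cat /tmp/health" and /healthz variants here, where the restart count stays at 0 for roughly four minutes) versus a probe that starts failing and drives the kubelet to restart the container. A minimal sketch of an HTTP liveness probe, with the image and port as placeholders; the HTTPGet field is set through Go field promotion so the snippet does not depend on whether the embedded handler struct is named Handler (as in the v1.13-era API in this log) or ProbeHandler (newer k8s.io/api releases):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Start probing shortly after the container boots and restart it only
    // after repeated failures.
    probe := &corev1.Probe{
        InitialDelaySeconds: 15,
        TimeoutSeconds:      1,
        FailureThreshold:    3,
    }
    // Assigned via the promoted field so this compiles against both older and
    // newer k8s.io/api versions.
    probe.HTTPGet = &corev1.HTTPGetAction{
        Path: "/healthz",           // the path probed by the tests logged above
        Port: intstr.FromInt(8080), // placeholder port
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:          "liveness",
                Image:         "k8s.gcr.io/liveness", // illustrative image choice
                LivenessProbe: probe,
            }},
        },
    }
    fmt.Println("liveness path:", pod.Spec.Containers[0].LivenessProbe.HTTPGet.Path)
}

In both outcomes the tests watch the container restart count in the pod status and compare it with the initial value recorded right after startup, as the log lines above show.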
Feb 1 11:59:44.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 11:59:44.173: INFO: namespace: e2e-tests-container-probe-rckxk, resource: bindings, ignored listing per whitelist Feb 1 11:59:44.227: INFO: namespace e2e-tests-container-probe-rckxk deletion completed in 6.427621769s • [SLOW TEST:257.479 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 11:59:44.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vfqbs [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vfqbs STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vfqbs Feb 1 11:59:44.467: INFO: Found 0 stateful pods, waiting for 1 Feb 1 11:59:54.513: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 1 11:59:54.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 11:59:55.453: INFO: stderr: "I0201 11:59:54.965267 1246 log.go:172] (0xc000138790) (0xc000665400) Create stream\nI0201 11:59:54.966085 1246 log.go:172] (0xc000138790) (0xc000665400) Stream added, broadcasting: 1\nI0201 11:59:54.973501 1246 log.go:172] (0xc000138790) Reply frame received for 1\nI0201 11:59:54.973540 1246 log.go:172] (0xc000138790) (0xc0006654a0) Create stream\nI0201 11:59:54.973549 1246 log.go:172] (0xc000138790) (0xc0006654a0) Stream added, broadcasting: 3\nI0201 11:59:54.974404 1246 log.go:172] (0xc000138790) Reply frame received for 3\nI0201 11:59:54.974429 1246 log.go:172] (0xc000138790) (0xc000665540) Create stream\nI0201 11:59:54.974434 1246 log.go:172] (0xc000138790) (0xc000665540) Stream added, broadcasting: 5\nI0201 11:59:54.975539 1246 log.go:172] (0xc000138790) Reply frame received for 5\nI0201 11:59:55.223388 1246 log.go:172] (0xc000138790) 
Data frame received for 3\nI0201 11:59:55.223495 1246 log.go:172] (0xc0006654a0) (3) Data frame handling\nI0201 11:59:55.223523 1246 log.go:172] (0xc0006654a0) (3) Data frame sent\nI0201 11:59:55.433683 1246 log.go:172] (0xc000138790) (0xc000665540) Stream removed, broadcasting: 5\nI0201 11:59:55.434118 1246 log.go:172] (0xc000138790) Data frame received for 1\nI0201 11:59:55.434180 1246 log.go:172] (0xc000138790) (0xc0006654a0) Stream removed, broadcasting: 3\nI0201 11:59:55.434262 1246 log.go:172] (0xc000665400) (1) Data frame handling\nI0201 11:59:55.434343 1246 log.go:172] (0xc000665400) (1) Data frame sent\nI0201 11:59:55.434376 1246 log.go:172] (0xc000138790) (0xc000665400) Stream removed, broadcasting: 1\nI0201 11:59:55.434512 1246 log.go:172] (0xc000138790) Go away received\nI0201 11:59:55.436530 1246 log.go:172] (0xc000138790) (0xc000665400) Stream removed, broadcasting: 1\nI0201 11:59:55.436619 1246 log.go:172] (0xc000138790) (0xc0006654a0) Stream removed, broadcasting: 3\nI0201 11:59:55.436645 1246 log.go:172] (0xc000138790) (0xc000665540) Stream removed, broadcasting: 5\n" Feb 1 11:59:55.454: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 11:59:55.454: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 11:59:55.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 1 12:00:05.497: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 1 12:00:05.497: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 12:00:05.605: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999971s Feb 1 12:00:06.635: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.916841251s Feb 1 12:00:07.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.887117944s Feb 1 12:00:08.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.868623148s Feb 1 12:00:09.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.828990823s Feb 1 12:00:10.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.80910444s Feb 1 12:00:11.784: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.756069737s Feb 1 12:00:12.798: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.737839757s Feb 1 12:00:13.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.724224309s Feb 1 12:00:14.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 706.48192ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vfqbs Feb 1 12:00:15.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:00:16.964: INFO: stderr: "I0201 12:00:16.311479 1268 log.go:172] (0xc0001386e0) (0xc0007ab540) Create stream\nI0201 12:00:16.312262 1268 log.go:172] (0xc0001386e0) (0xc0007ab540) Stream added, broadcasting: 1\nI0201 12:00:16.327412 1268 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0201 12:00:16.327563 1268 log.go:172] (0xc0001386e0) (0xc0007ab5e0) Create stream\nI0201 12:00:16.327588 1268 log.go:172] (0xc0001386e0) (0xc0007ab5e0) Stream added, broadcasting: 3\nI0201 12:00:16.329513 1268 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0201 12:00:16.329609 1268 
log.go:172] (0xc0001386e0) (0xc000746000) Create stream\nI0201 12:00:16.329690 1268 log.go:172] (0xc0001386e0) (0xc000746000) Stream added, broadcasting: 5\nI0201 12:00:16.331125 1268 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0201 12:00:16.688259 1268 log.go:172] (0xc0001386e0) Data frame received for 3\nI0201 12:00:16.688688 1268 log.go:172] (0xc0007ab5e0) (3) Data frame handling\nI0201 12:00:16.688791 1268 log.go:172] (0xc0007ab5e0) (3) Data frame sent\nI0201 12:00:16.948947 1268 log.go:172] (0xc0001386e0) (0xc0007ab5e0) Stream removed, broadcasting: 3\nI0201 12:00:16.949425 1268 log.go:172] (0xc0001386e0) Data frame received for 1\nI0201 12:00:16.949889 1268 log.go:172] (0xc0001386e0) (0xc000746000) Stream removed, broadcasting: 5\nI0201 12:00:16.950095 1268 log.go:172] (0xc0007ab540) (1) Data frame handling\nI0201 12:00:16.950194 1268 log.go:172] (0xc0007ab540) (1) Data frame sent\nI0201 12:00:16.950325 1268 log.go:172] (0xc0001386e0) (0xc0007ab540) Stream removed, broadcasting: 1\nI0201 12:00:16.950489 1268 log.go:172] (0xc0001386e0) Go away received\nI0201 12:00:16.951710 1268 log.go:172] (0xc0001386e0) (0xc0007ab540) Stream removed, broadcasting: 1\nI0201 12:00:16.951810 1268 log.go:172] (0xc0001386e0) (0xc0007ab5e0) Stream removed, broadcasting: 3\nI0201 12:00:16.951824 1268 log.go:172] (0xc0001386e0) (0xc000746000) Stream removed, broadcasting: 5\n" Feb 1 12:00:16.965: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 12:00:16.965: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 12:00:16.981: INFO: Found 1 stateful pods, waiting for 3 Feb 1 12:00:27.000: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:00:27.000: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:00:27.000: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 12:00:37.006: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:00:37.006: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:00:37.006: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 1 12:00:37.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 12:00:37.593: INFO: stderr: "I0201 12:00:37.269374 1290 log.go:172] (0xc000152790) (0xc00065d220) Create stream\nI0201 12:00:37.269920 1290 log.go:172] (0xc000152790) (0xc00065d220) Stream added, broadcasting: 1\nI0201 12:00:37.276302 1290 log.go:172] (0xc000152790) Reply frame received for 1\nI0201 12:00:37.276346 1290 log.go:172] (0xc000152790) (0xc00072e000) Create stream\nI0201 12:00:37.276359 1290 log.go:172] (0xc000152790) (0xc00072e000) Stream added, broadcasting: 3\nI0201 12:00:37.277573 1290 log.go:172] (0xc000152790) Reply frame received for 3\nI0201 12:00:37.277591 1290 log.go:172] (0xc000152790) (0xc00072e140) Create stream\nI0201 12:00:37.277596 1290 log.go:172] (0xc000152790) (0xc00072e140) Stream added, broadcasting: 5\nI0201 12:00:37.279577 1290 log.go:172] (0xc000152790) Reply frame received for 5\nI0201 12:00:37.429898 1290 
log.go:172] (0xc000152790) Data frame received for 3\nI0201 12:00:37.429970 1290 log.go:172] (0xc00072e000) (3) Data frame handling\nI0201 12:00:37.429995 1290 log.go:172] (0xc00072e000) (3) Data frame sent\nI0201 12:00:37.575411 1290 log.go:172] (0xc000152790) (0xc00072e140) Stream removed, broadcasting: 5\nI0201 12:00:37.575711 1290 log.go:172] (0xc000152790) Data frame received for 1\nI0201 12:00:37.575803 1290 log.go:172] (0xc000152790) (0xc00072e000) Stream removed, broadcasting: 3\nI0201 12:00:37.575883 1290 log.go:172] (0xc00065d220) (1) Data frame handling\nI0201 12:00:37.575927 1290 log.go:172] (0xc00065d220) (1) Data frame sent\nI0201 12:00:37.575943 1290 log.go:172] (0xc000152790) (0xc00065d220) Stream removed, broadcasting: 1\nI0201 12:00:37.575980 1290 log.go:172] (0xc000152790) Go away received\nI0201 12:00:37.577130 1290 log.go:172] (0xc000152790) (0xc00065d220) Stream removed, broadcasting: 1\nI0201 12:00:37.577145 1290 log.go:172] (0xc000152790) (0xc00072e000) Stream removed, broadcasting: 3\nI0201 12:00:37.577151 1290 log.go:172] (0xc000152790) (0xc00072e140) Stream removed, broadcasting: 5\n" Feb 1 12:00:37.593: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 12:00:37.594: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 12:00:37.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 12:00:38.196: INFO: stderr: "I0201 12:00:37.772377 1312 log.go:172] (0xc0006a6370) (0xc0006ca640) Create stream\nI0201 12:00:37.772754 1312 log.go:172] (0xc0006a6370) (0xc0006ca640) Stream added, broadcasting: 1\nI0201 12:00:37.776680 1312 log.go:172] (0xc0006a6370) Reply frame received for 1\nI0201 12:00:37.776710 1312 log.go:172] (0xc0006a6370) (0xc00065ec80) Create stream\nI0201 12:00:37.776719 1312 log.go:172] (0xc0006a6370) (0xc00065ec80) Stream added, broadcasting: 3\nI0201 12:00:37.777767 1312 log.go:172] (0xc0006a6370) Reply frame received for 3\nI0201 12:00:37.777796 1312 log.go:172] (0xc0006a6370) (0xc00020c000) Create stream\nI0201 12:00:37.777810 1312 log.go:172] (0xc0006a6370) (0xc00020c000) Stream added, broadcasting: 5\nI0201 12:00:37.778688 1312 log.go:172] (0xc0006a6370) Reply frame received for 5\nI0201 12:00:37.962543 1312 log.go:172] (0xc0006a6370) Data frame received for 3\nI0201 12:00:37.962718 1312 log.go:172] (0xc00065ec80) (3) Data frame handling\nI0201 12:00:37.962746 1312 log.go:172] (0xc00065ec80) (3) Data frame sent\nI0201 12:00:38.180170 1312 log.go:172] (0xc0006a6370) Data frame received for 1\nI0201 12:00:38.180302 1312 log.go:172] (0xc0006a6370) (0xc00065ec80) Stream removed, broadcasting: 3\nI0201 12:00:38.180452 1312 log.go:172] (0xc0006a6370) (0xc00020c000) Stream removed, broadcasting: 5\nI0201 12:00:38.180480 1312 log.go:172] (0xc0006ca640) (1) Data frame handling\nI0201 12:00:38.180515 1312 log.go:172] (0xc0006ca640) (1) Data frame sent\nI0201 12:00:38.180568 1312 log.go:172] (0xc0006a6370) (0xc0006ca640) Stream removed, broadcasting: 1\nI0201 12:00:38.180621 1312 log.go:172] (0xc0006a6370) Go away received\nI0201 12:00:38.181458 1312 log.go:172] (0xc0006a6370) (0xc0006ca640) Stream removed, broadcasting: 1\nI0201 12:00:38.181520 1312 log.go:172] (0xc0006a6370) (0xc00065ec80) Stream removed, broadcasting: 3\nI0201 12:00:38.181557 1312 log.go:172] (0xc0006a6370) (0xc00020c000) Stream removed, 
broadcasting: 5\n" Feb 1 12:00:38.196: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 12:00:38.196: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 12:00:38.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 12:00:38.705: INFO: stderr: "I0201 12:00:38.403310 1334 log.go:172] (0xc0007ac370) (0xc000647400) Create stream\nI0201 12:00:38.403638 1334 log.go:172] (0xc0007ac370) (0xc000647400) Stream added, broadcasting: 1\nI0201 12:00:38.407866 1334 log.go:172] (0xc0007ac370) Reply frame received for 1\nI0201 12:00:38.407908 1334 log.go:172] (0xc0007ac370) (0xc0006474a0) Create stream\nI0201 12:00:38.407929 1334 log.go:172] (0xc0007ac370) (0xc0006474a0) Stream added, broadcasting: 3\nI0201 12:00:38.409311 1334 log.go:172] (0xc0007ac370) Reply frame received for 3\nI0201 12:00:38.409393 1334 log.go:172] (0xc0007ac370) (0xc0004ea000) Create stream\nI0201 12:00:38.409406 1334 log.go:172] (0xc0007ac370) (0xc0004ea000) Stream added, broadcasting: 5\nI0201 12:00:38.411094 1334 log.go:172] (0xc0007ac370) Reply frame received for 5\nI0201 12:00:38.604649 1334 log.go:172] (0xc0007ac370) Data frame received for 3\nI0201 12:00:38.604716 1334 log.go:172] (0xc0006474a0) (3) Data frame handling\nI0201 12:00:38.604744 1334 log.go:172] (0xc0006474a0) (3) Data frame sent\nI0201 12:00:38.694946 1334 log.go:172] (0xc0007ac370) Data frame received for 1\nI0201 12:00:38.695121 1334 log.go:172] (0xc0007ac370) (0xc0006474a0) Stream removed, broadcasting: 3\nI0201 12:00:38.695214 1334 log.go:172] (0xc000647400) (1) Data frame handling\nI0201 12:00:38.695267 1334 log.go:172] (0xc000647400) (1) Data frame sent\nI0201 12:00:38.695323 1334 log.go:172] (0xc0007ac370) (0xc0004ea000) Stream removed, broadcasting: 5\nI0201 12:00:38.695414 1334 log.go:172] (0xc0007ac370) (0xc000647400) Stream removed, broadcasting: 1\nI0201 12:00:38.695500 1334 log.go:172] (0xc0007ac370) Go away received\nI0201 12:00:38.696254 1334 log.go:172] (0xc0007ac370) (0xc000647400) Stream removed, broadcasting: 1\nI0201 12:00:38.696270 1334 log.go:172] (0xc0007ac370) (0xc0006474a0) Stream removed, broadcasting: 3\nI0201 12:00:38.696277 1334 log.go:172] (0xc0007ac370) (0xc0004ea000) Stream removed, broadcasting: 5\n" Feb 1 12:00:38.705: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 12:00:38.705: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 12:00:38.705: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 12:00:38.714: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 1 12:00:48.841: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 1 12:00:48.841: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 1 12:00:48.841: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 1 12:00:48.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999717s Feb 1 12:00:49.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973683076s Feb 1 12:00:50.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947364054s Feb 1 12:00:52.022: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.887370396s Feb 1 12:00:53.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.848775707s Feb 1 12:00:54.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.838159253s Feb 1 12:00:55.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.814905836s Feb 1 12:00:56.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.794833032s Feb 1 12:00:57.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.736803161s Feb 1 12:00:58.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 723.19995ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-vfqbs Feb 1 12:00:59.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:00:59.893: INFO: stderr: "I0201 12:00:59.533939 1355 log.go:172] (0xc0006e80b0) (0xc00070a5a0) Create stream\nI0201 12:00:59.534511 1355 log.go:172] (0xc0006e80b0) (0xc00070a5a0) Stream added, broadcasting: 1\nI0201 12:00:59.543158 1355 log.go:172] (0xc0006e80b0) Reply frame received for 1\nI0201 12:00:59.543281 1355 log.go:172] (0xc0006e80b0) (0xc00070a640) Create stream\nI0201 12:00:59.543309 1355 log.go:172] (0xc0006e80b0) (0xc00070a640) Stream added, broadcasting: 3\nI0201 12:00:59.545021 1355 log.go:172] (0xc0006e80b0) Reply frame received for 3\nI0201 12:00:59.545078 1355 log.go:172] (0xc0006e80b0) (0xc0008800a0) Create stream\nI0201 12:00:59.545097 1355 log.go:172] (0xc0006e80b0) (0xc0008800a0) Stream added, broadcasting: 5\nI0201 12:00:59.548899 1355 log.go:172] (0xc0006e80b0) Reply frame received for 5\nI0201 12:00:59.711805 1355 log.go:172] (0xc0006e80b0) Data frame received for 3\nI0201 12:00:59.711956 1355 log.go:172] (0xc00070a640) (3) Data frame handling\nI0201 12:00:59.712005 1355 log.go:172] (0xc00070a640) (3) Data frame sent\nI0201 12:00:59.875829 1355 log.go:172] (0xc0006e80b0) Data frame received for 1\nI0201 12:00:59.876160 1355 log.go:172] (0xc00070a5a0) (1) Data frame handling\nI0201 12:00:59.876295 1355 log.go:172] (0xc00070a5a0) (1) Data frame sent\nI0201 12:00:59.877097 1355 log.go:172] (0xc0006e80b0) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0201 12:00:59.877320 1355 log.go:172] (0xc0006e80b0) (0xc0008800a0) Stream removed, broadcasting: 5\nI0201 12:00:59.877418 1355 log.go:172] (0xc0006e80b0) (0xc00070a640) Stream removed, broadcasting: 3\nI0201 12:00:59.877446 1355 log.go:172] (0xc0006e80b0) Go away received\nI0201 12:00:59.878083 1355 log.go:172] (0xc0006e80b0) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0201 12:00:59.878110 1355 log.go:172] (0xc0006e80b0) (0xc00070a640) Stream removed, broadcasting: 3\nI0201 12:00:59.878133 1355 log.go:172] (0xc0006e80b0) (0xc0008800a0) Stream removed, broadcasting: 5\n" Feb 1 12:00:59.893: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 12:00:59.893: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 12:00:59.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:00.995: INFO: stderr: "I0201 12:01:00.600706 1376 log.go:172] (0xc000150790) (0xc0006b12c0) Create stream\nI0201 
12:01:00.601111 1376 log.go:172] (0xc000150790) (0xc0006b12c0) Stream added, broadcasting: 1\nI0201 12:01:00.612167 1376 log.go:172] (0xc000150790) Reply frame received for 1\nI0201 12:01:00.612211 1376 log.go:172] (0xc000150790) (0xc000734000) Create stream\nI0201 12:01:00.612219 1376 log.go:172] (0xc000150790) (0xc000734000) Stream added, broadcasting: 3\nI0201 12:01:00.613799 1376 log.go:172] (0xc000150790) Reply frame received for 3\nI0201 12:01:00.613826 1376 log.go:172] (0xc000150790) (0xc0005d2000) Create stream\nI0201 12:01:00.613834 1376 log.go:172] (0xc000150790) (0xc0005d2000) Stream added, broadcasting: 5\nI0201 12:01:00.616217 1376 log.go:172] (0xc000150790) Reply frame received for 5\nI0201 12:01:00.750021 1376 log.go:172] (0xc000150790) Data frame received for 3\nI0201 12:01:00.750197 1376 log.go:172] (0xc000734000) (3) Data frame handling\nI0201 12:01:00.750260 1376 log.go:172] (0xc000734000) (3) Data frame sent\nI0201 12:01:00.973771 1376 log.go:172] (0xc000150790) Data frame received for 1\nI0201 12:01:00.974054 1376 log.go:172] (0xc0006b12c0) (1) Data frame handling\nI0201 12:01:00.974080 1376 log.go:172] (0xc0006b12c0) (1) Data frame sent\nI0201 12:01:00.974112 1376 log.go:172] (0xc000150790) (0xc0006b12c0) Stream removed, broadcasting: 1\nI0201 12:01:00.981887 1376 log.go:172] (0xc000150790) (0xc000734000) Stream removed, broadcasting: 3\nI0201 12:01:00.982101 1376 log.go:172] (0xc000150790) (0xc0005d2000) Stream removed, broadcasting: 5\nI0201 12:01:00.982208 1376 log.go:172] (0xc000150790) (0xc0006b12c0) Stream removed, broadcasting: 1\nI0201 12:01:00.982248 1376 log.go:172] (0xc000150790) (0xc000734000) Stream removed, broadcasting: 3\nI0201 12:01:00.982280 1376 log.go:172] (0xc000150790) (0xc0005d2000) Stream removed, broadcasting: 5\n" Feb 1 12:01:00.996: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 12:01:00.996: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 12:01:00.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:01.662: INFO: rc: 126 Feb 1 12:01:01.663: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown I0201 12:01:01.595483 1397 log.go:172] (0xc000138160) (0xc000768e60) Create stream I0201 12:01:01.595823 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream added, broadcasting: 1 I0201 12:01:01.606064 1397 log.go:172] (0xc000138160) Reply frame received for 1 I0201 12:01:01.606143 1397 log.go:172] (0xc000138160) (0xc00056c000) Create stream I0201 12:01:01.606168 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream added, broadcasting: 3 I0201 12:01:01.607244 1397 log.go:172] (0xc000138160) Reply frame received for 3 I0201 12:01:01.607281 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Create stream I0201 12:01:01.607294 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream added, broadcasting: 5 I0201 12:01:01.609454 1397 log.go:172] (0xc000138160) Reply frame received for 5 I0201 12:01:01.642324 1397 log.go:172] (0xc000138160) Data frame received for 3 I0201 12:01:01.642362 1397 log.go:172] (0xc00056c000) (3) Data frame handling I0201 
12:01:01.642384 1397 log.go:172] (0xc00056c000) (3) Data frame sent I0201 12:01:01.646122 1397 log.go:172] (0xc000138160) Data frame received for 1 I0201 12:01:01.646197 1397 log.go:172] (0xc000768e60) (1) Data frame handling I0201 12:01:01.646239 1397 log.go:172] (0xc000768e60) (1) Data frame sent I0201 12:01:01.646327 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream removed, broadcasting: 3 I0201 12:01:01.646529 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream removed, broadcasting: 1 I0201 12:01:01.647716 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream removed, broadcasting: 5 I0201 12:01:01.647830 1397 log.go:172] (0xc000138160) Go away received I0201 12:01:01.647985 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream removed, broadcasting: 1 I0201 12:01:01.648060 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream removed, broadcasting: 3 I0201 12:01:01.648079 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc002269800 exit status 126 true [0xc0016b2118 0xc0016b2130 0xc0016b2148] [0xc0016b2118 0xc0016b2130 0xc0016b2148] [0xc0016b2128 0xc0016b2140] [0x935700 0x935700] 0xc000870300 }: Command stdout: cannot exec in a stopped state: unknown stderr: I0201 12:01:01.595483 1397 log.go:172] (0xc000138160) (0xc000768e60) Create stream I0201 12:01:01.595823 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream added, broadcasting: 1 I0201 12:01:01.606064 1397 log.go:172] (0xc000138160) Reply frame received for 1 I0201 12:01:01.606143 1397 log.go:172] (0xc000138160) (0xc00056c000) Create stream I0201 12:01:01.606168 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream added, broadcasting: 3 I0201 12:01:01.607244 1397 log.go:172] (0xc000138160) Reply frame received for 3 I0201 12:01:01.607281 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Create stream I0201 12:01:01.607294 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream added, broadcasting: 5 I0201 12:01:01.609454 1397 log.go:172] (0xc000138160) Reply frame received for 5 I0201 12:01:01.642324 1397 log.go:172] (0xc000138160) Data frame received for 3 I0201 12:01:01.642362 1397 log.go:172] (0xc00056c000) (3) Data frame handling I0201 12:01:01.642384 1397 log.go:172] (0xc00056c000) (3) Data frame sent I0201 12:01:01.646122 1397 log.go:172] (0xc000138160) Data frame received for 1 I0201 12:01:01.646197 1397 log.go:172] (0xc000768e60) (1) Data frame handling I0201 12:01:01.646239 1397 log.go:172] (0xc000768e60) (1) Data frame sent I0201 12:01:01.646327 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream removed, broadcasting: 3 I0201 12:01:01.646529 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream removed, broadcasting: 1 I0201 12:01:01.647716 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream removed, broadcasting: 5 I0201 12:01:01.647830 1397 log.go:172] (0xc000138160) Go away received I0201 12:01:01.647985 1397 log.go:172] (0xc000138160) (0xc000768e60) Stream removed, broadcasting: 1 I0201 12:01:01.648060 1397 log.go:172] (0xc000138160) (0xc00056c000) Stream removed, broadcasting: 3 I0201 12:01:01.648079 1397 log.go:172] (0xc000138160) (0xc00056c0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Feb 1 12:01:11.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:11.788: INFO: rc: 1 Feb 1 12:01:11.788: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269920 exit status 1 true [0xc0016b2150 0xc0016b2190 0xc0016b21c0] [0xc0016b2150 0xc0016b2190 0xc0016b21c0] [0xc0016b2188 0xc0016b21a0] [0x935700 0x935700] 0xc0008705a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:01:21.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:21.943: INFO: rc: 1 Feb 1 12:01:21.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145def0 exit status 1 true [0xc0015500c8 0xc0015500e0 0xc0015500f8] [0xc0015500c8 0xc0015500e0 0xc0015500f8] [0xc0015500d8 0xc0015500f0] [0x935700 0x935700] 0xc001b57a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:01:31.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:32.097: INFO: rc: 1 Feb 1 12:01:32.097: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269a40 exit status 1 true [0xc0016b21d8 0xc0016b21f0 0xc0016b2230] [0xc0016b21d8 0xc0016b21f0 0xc0016b2230] [0xc0016b21e8 0xc0016b2228] [0x935700 0x935700] 0xc000870900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:01:42.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:42.225: INFO: rc: 1 Feb 1 12:01:42.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269b90 exit status 1 true [0xc0016b2250 0xc0016b22a0 0xc0016b22d0] [0xc0016b2250 0xc0016b22a0 0xc0016b22d0] [0xc0016b2288 0xc0016b22b0] [0x935700 0x935700] 0xc0008715c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:01:52.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:01:52.333: INFO: rc: 1 Feb 1 12:01:52.333: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269ce0 exit status 1 true [0xc0016b22e8 0xc0016b2300 0xc0016b2350] [0xc0016b22e8 0xc0016b2300 0xc0016b2350] [0xc0016b22f8 0xc0016b2338] [0x935700 0x935700] 0xc000871860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:02.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:02.447: INFO: rc: 1 Feb 1 12:02:02.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269e30 exit status 1 true [0xc0016b2370 0xc0016b23c0 0xc0016b23d8] [0xc0016b2370 0xc0016b23c0 0xc0016b23d8] [0xc0016b23a8 0xc0016b23d0] [0x935700 0x935700] 0xc000871b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:12.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:12.593: INFO: rc: 1 Feb 1 12:02:12.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00089a060 exit status 1 true [0xc001550100 0xc001550118 0xc001550130] [0xc001550100 0xc001550118 0xc001550130] [0xc001550110 0xc001550128] [0x935700 0x935700] 0xc001b57ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:22.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:22.762: INFO: rc: 1 Feb 1 12:02:22.762: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002269f80 exit status 1 true [0xc0016b23e0 0xc0016b23f8 0xc0016b2420] [0xc0016b23e0 0xc0016b23f8 0xc0016b2420] [0xc0016b23f0 0xc0016b2408] [0x935700 0x935700] 0xc000871da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:32.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:32.867: INFO: rc: 1 Feb 1 12:02:32.867: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020c2f90 exit status 1 true [0xc0005b16b8 0xc0005b16f0 0xc0005b1768] [0xc0005b16b8 0xc0005b16f0 0xc0005b1768] [0xc0005b16d8 0xc0005b1748] 
[0x935700 0x935700] 0xc000b2b560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:42.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:43.083: INFO: rc: 1 Feb 1 12:02:43.083: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a120 exit status 1 true [0xc0000e8288 0xc0016b2010 0xc0016b2028] [0xc0000e8288 0xc0016b2010 0xc0016b2028] [0xc0016b2008 0xc0016b2020] [0x935700 0x935700] 0xc0012301e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:02:53.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:02:53.307: INFO: rc: 1 Feb 1 12:02:53.307: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c120 exit status 1 true [0xc001e52000 0xc001e52018 0xc001e52030] [0xc001e52000 0xc001e52018 0xc001e52030] [0xc001e52010 0xc001e52028] [0x935700 0x935700] 0xc001927020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:03.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:03.486: INFO: rc: 1 Feb 1 12:03:03.486: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a270 exit status 1 true [0xc0016b2030 0xc0016b2070 0xc0016b20a0] [0xc0016b2030 0xc0016b2070 0xc0016b20a0] [0xc0016b2068 0xc0016b2080] [0x935700 0x935700] 0xc001230600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:13.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:13.769: INFO: rc: 1 Feb 1 12:03:13.770: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c270 exit status 1 true [0xc001e52038 0xc001e52050 0xc001e52088] [0xc001e52038 0xc001e52050 0xc001e52088] [0xc001e52048 0xc001e52070] [0x935700 0x935700] 0xc00099a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:23.771: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:24.030: INFO: rc: 1 Feb 1 12:03:24.030: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c7e0 exit status 1 true [0xc001e520a8 0xc001e520d0 0xc001e52108] [0xc001e520a8 0xc001e520d0 0xc001e52108] [0xc001e520c8 0xc001e520f0] [0x935700 0x935700] 0xc00099aa20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:34.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:34.214: INFO: rc: 1 Feb 1 12:03:34.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145c120 exit status 1 true [0xc0005b0038 0xc0005b0220 0xc0005b0250] [0xc0005b0038 0xc0005b0220 0xc0005b0250] [0xc0005b0188 0xc0005b0248] [0x935700 0x935700] 0xc001850ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:44.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:44.364: INFO: rc: 1 Feb 1 12:03:44.364: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145c240 exit status 1 true [0xc0005b02c0 0xc0005b0550 0xc0005b0b38] [0xc0005b02c0 0xc0005b0550 0xc0005b0b38] [0xc0005b0528 0xc0005b06a8] [0x935700 0x935700] 0xc001851320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:03:54.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:03:54.567: INFO: rc: 1 Feb 1 12:03:54.567: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145c390 exit status 1 true [0xc0005b0b90 0xc0005b0cd0 0xc0005b0e60] [0xc0005b0b90 0xc0005b0cd0 0xc0005b0e60] [0xc0005b0c88 0xc0005b0d88] [0x935700 0x935700] 0xc000870000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:04.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:04.764: INFO: rc: 1 Feb 1 12:04:04.764: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145c4e0 exit status 1 true [0xc0005b0e68 0xc0005b0eb0 0xc0005b1030] [0xc0005b0e68 0xc0005b0eb0 0xc0005b1030] [0xc0005b0e90 0xc0005b0ff0] [0x935700 0x935700] 0xc0008702a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:14.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:14.931: INFO: rc: 1 Feb 1 12:04:14.931: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c9c0 exit status 1 true [0xc001e52110 0xc001e52128 0xc001e52160] [0xc001e52110 0xc001e52128 0xc001e52160] [0xc001e52120 0xc001e52158] [0x935700 0x935700] 0xc00099af60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:24.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:25.044: INFO: rc: 1 Feb 1 12:04:25.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a3c0 exit status 1 true [0xc0016b20b8 0xc0016b20d0 0xc0016b2110] [0xc0016b20b8 0xc0016b20d0 0xc0016b2110] [0xc0016b20c8 0xc0016b2108] [0x935700 0x935700] 0xc0018721e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:35.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:35.206: INFO: rc: 1 Feb 1 12:04:35.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7cc30 exit status 1 true [0xc001e52168 0xc001e52180 0xc001e521b0] [0xc001e52168 0xc001e52180 0xc001e521b0] [0xc001e52178 0xc001e52198] [0x935700 0x935700] 0xc00099b440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:45.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:45.367: INFO: rc: 1 Feb 1 12:04:45.367: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c150 exit status 1 true [0xc0000e8288 0xc001e52008 0xc001e52020] [0xc0000e8288 0xc001e52008 0xc001e52020] [0xc001e52000 0xc001e52018] [0x935700 0x935700] 0xc001850a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:04:55.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:04:55.519: INFO: rc: 1 Feb 1 12:04:55.519: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002268150 exit status 1 true [0xc001550008 0xc001550020 0xc001550038] [0xc001550008 0xc001550020 0xc001550038] [0xc001550018 0xc001550030] [0x935700 0x935700] 0xc001927020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:05.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:05.682: INFO: rc: 1 Feb 1 12:05:05.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a150 exit status 1 true [0xc0016b2000 0xc0016b2018 0xc0016b2030] [0xc0016b2000 0xc0016b2018 0xc0016b2030] [0xc0016b2010 0xc0016b2028] [0x935700 0x935700] 0xc0012301e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:15.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:15.852: INFO: rc: 1 Feb 1 12:05:15.852: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00145c150 exit status 1 true [0xc0005b0038 0xc0005b0220 0xc0005b0250] [0xc0005b0038 0xc0005b0220 0xc0005b0250] [0xc0005b0188 0xc0005b0248] [0x935700 0x935700] 0xc00099a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:25.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:26.060: INFO: rc: 1 Feb 1 12:05:26.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002268270 exit status 1 true [0xc001550040 0xc001550058 0xc001550070] [0xc001550040 0xc001550058 
0xc001550070] [0xc001550050 0xc001550068] [0x935700 0x935700] 0xc0018721e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:36.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:36.214: INFO: rc: 1 Feb 1 12:05:36.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c7c5d0 exit status 1 true [0xc001e52028 0xc001e52040 0xc001e52060] [0xc001e52028 0xc001e52040 0xc001e52060] [0xc001e52038 0xc001e52050] [0x935700 0x935700] 0xc001851140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:46.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:46.376: INFO: rc: 1 Feb 1 12:05:46.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a2d0 exit status 1 true [0xc0016b2050 0xc0016b2078 0xc0016b20b8] [0xc0016b2050 0xc0016b2078 0xc0016b20b8] [0xc0016b2070 0xc0016b20a0] [0x935700 0x935700] 0xc001230600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:05:56.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:05:56.549: INFO: rc: 1 Feb 1 12:05:56.550: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064a4e0 exit status 1 true [0xc0016b20c0 0xc0016b20f0 0xc0016b2118] [0xc0016b20c0 0xc0016b20f0 0xc0016b2118] [0xc0016b20d0 0xc0016b2110] [0x935700 0x935700] 0xc0008701e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 1 12:06:06.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vfqbs ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:06:06.679: INFO: rc: 1 Feb 1 12:06:06.679: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Feb 1 12:06:06.679: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 1 12:06:06.697: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vfqbs Feb 1 12:06:06.700: INFO: Scaling statefulset ss to 0 Feb 1 12:06:06.709: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 12:06:06.712: 
INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:06:06.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vfqbs" for this suite. Feb 1 12:06:14.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:06:14.918: INFO: namespace: e2e-tests-statefulset-vfqbs, resource: bindings, ignored listing per whitelist Feb 1 12:06:15.038: INFO: namespace e2e-tests-statefulset-vfqbs deletion completed in 8.291283815s • [SLOW TEST:390.811 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:06:15.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005 Feb 1 12:06:15.322: INFO: Pod name my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005: Found 0 pods out of 1 Feb 1 12:06:20.406: INFO: Pod name my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005: Found 1 pods out of 1 Feb 1 12:06:20.406: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005" are running Feb 1 12:06:26.444: INFO: Pod "my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005-c4hrk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:06:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:06:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:06:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:06:15 +0000 UTC Reason: Message:}]) Feb 1 12:06:26.444: INFO: Trying to dial the pod Feb 1 12:06:31.604: INFO: Controller my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005: Got expected result from replica 1 
[my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005-c4hrk]: "my-hostname-basic-3ef6b235-44eb-11ea-a88d-0242ac110005-c4hrk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:06:31.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-qpsbc" for this suite. Feb 1 12:06:37.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:06:37.789: INFO: namespace: e2e-tests-replication-controller-qpsbc, resource: bindings, ignored listing per whitelist Feb 1 12:06:37.902: INFO: namespace e2e-tests-replication-controller-qpsbc deletion completed in 6.278205808s • [SLOW TEST:22.863 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:06:37.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 1 12:06:50.252: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-4c8eaa20-44eb-11ea-a88d-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-blwvb", SelfLink:"/api/v1/namespaces/e2e-tests-pods-blwvb/pods/pod-submit-remove-4c8eaa20-44eb-11ea-a88d-0242ac110005", UID:"4c922227-44eb-11ea-a994-fa163e34d433", ResourceVersion:"20191143", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716155598, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"64300510"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-h94f2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000f82600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-h94f2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001514f68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008700c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015150e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001515100)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001515108), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00151510c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716155598, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716155608, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716155608, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716155598, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002449ea0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002449ec0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://c74e0df498ac0a742b5060f595f9a4328d8c91be9d290a56ac62408ed76b5a60"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:07:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-blwvb" for this suite. Feb 1 12:07:08.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:07:08.866: INFO: namespace: e2e-tests-pods-blwvb, resource: bindings, ignored listing per whitelist Feb 1 12:07:08.932: INFO: namespace e2e-tests-pods-blwvb deletion completed in 6.26490323s • [SLOW TEST:31.030 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:07:08.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-5f0f5b4b-44eb-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 12:07:09.129: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-g9vjh" to be "success or failure" Feb 1 12:07:09.169: INFO: Pod 
"pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.50826ms Feb 1 12:07:11.187: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05865189s Feb 1 12:07:13.197: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068797875s Feb 1 12:07:15.216: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086928546s Feb 1 12:07:17.283: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154608477s Feb 1 12:07:19.298: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168888691s STEP: Saw pod success Feb 1 12:07:19.298: INFO: Pod "pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:07:19.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 1 12:07:20.337: INFO: Waiting for pod pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:07:20.449: INFO: Pod pod-configmaps-5f10cb54-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:07:20.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-g9vjh" for this suite. Feb 1 12:07:26.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:07:26.772: INFO: namespace: e2e-tests-configmap-g9vjh, resource: bindings, ignored listing per whitelist Feb 1 12:07:26.774: INFO: namespace e2e-tests-configmap-g9vjh deletion completed in 6.285674263s • [SLOW TEST:17.841 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:07:26.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:07:27.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xkt2q" for this suite. Feb 1 12:07:33.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:07:33.728: INFO: namespace: e2e-tests-kubelet-test-xkt2q, resource: bindings, ignored listing per whitelist Feb 1 12:07:33.830: INFO: namespace e2e-tests-kubelet-test-xkt2q deletion completed in 6.447019611s • [SLOW TEST:7.056 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:07:33.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 12:07:34.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-t7m6f" to be "success or failure" Feb 1 12:07:34.123: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.853801ms Feb 1 12:07:36.138: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053061016s Feb 1 12:07:38.158: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073026394s Feb 1 12:07:40.177: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091629268s Feb 1 12:07:42.198: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112728844s Feb 1 12:07:44.223: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.1378018s STEP: Saw pod success Feb 1 12:07:44.223: INFO: Pod "downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:07:44.229: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 12:07:44.463: INFO: Waiting for pod downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:07:44.475: INFO: Pod downwardapi-volume-6def056b-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:07:44.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t7m6f" for this suite. Feb 1 12:07:50.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:07:50.640: INFO: namespace: e2e-tests-projected-t7m6f, resource: bindings, ignored listing per whitelist Feb 1 12:07:50.677: INFO: namespace e2e-tests-projected-t7m6f deletion completed in 6.194777286s • [SLOW TEST:16.846 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:07:50.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
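
The entries below show the suite polling the DaemonSet status until the single schedulable node reports an available pod. A rough command-line sketch of the same check, using the DaemonSet name ("daemon-set") and namespace (e2e-tests-daemonsets-zrbfd) that appear in this test's log; the one-second interval is arbitrary, and the suite itself performs this check through the API client rather than kubectl:

    # Illustrative only: poll the DaemonSet until it reports an available pod
    # for every node it is scheduled to.
    NS=e2e-tests-daemonsets-zrbfd
    desired()   { kubectl -n "$NS" get daemonset daemon-set -o jsonpath='{.status.desiredNumberScheduled}'; }
    available() { kubectl -n "$NS" get daemonset daemon-set -o jsonpath='{.status.numberAvailable}'; }
    until [ -n "$(available)" ] && [ "$(available)" = "$(desired)" ]; do sleep 1; done
    # Or let kubectl do the waiting in a single command:
    kubectl -n "$NS" rollout status daemonset/daemon-set
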
Feb 1 12:07:51.102: INFO: Number of nodes with available pods: 0
Feb 1 12:07:51.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:52.128: INFO: Number of nodes with available pods: 0
Feb 1 12:07:52.128: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:53.126: INFO: Number of nodes with available pods: 0
Feb 1 12:07:53.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:54.144: INFO: Number of nodes with available pods: 0
Feb 1 12:07:54.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:55.123: INFO: Number of nodes with available pods: 0
Feb 1 12:07:55.123: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:56.813: INFO: Number of nodes with available pods: 0
Feb 1 12:07:56.813: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:57.118: INFO: Number of nodes with available pods: 0
Feb 1 12:07:57.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:58.132: INFO: Number of nodes with available pods: 0
Feb 1 12:07:58.132: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:07:59.128: INFO: Number of nodes with available pods: 0
Feb 1 12:07:59.128: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:00.202: INFO: Number of nodes with available pods: 1
Feb 1 12:08:00.202: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 1 12:08:00.251: INFO: Number of nodes with available pods: 0
Feb 1 12:08:00.251: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:01.347: INFO: Number of nodes with available pods: 0
Feb 1 12:08:01.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:02.289: INFO: Number of nodes with available pods: 0
Feb 1 12:08:02.289: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:03.367: INFO: Number of nodes with available pods: 0
Feb 1 12:08:03.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:04.272: INFO: Number of nodes with available pods: 0
Feb 1 12:08:04.272: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:05.304: INFO: Number of nodes with available pods: 0
Feb 1 12:08:05.305: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:06.273: INFO: Number of nodes with available pods: 0
Feb 1 12:08:06.274: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:07.274: INFO: Number of nodes with available pods: 0
Feb 1 12:08:07.274: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:08.334: INFO: Number of nodes with available pods: 0
Feb 1 12:08:08.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:09.310: INFO: Number of nodes with available pods: 0
Feb 1 12:08:09.310: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:10.303: INFO: Number of nodes with available pods: 0
Feb 1 12:08:10.303: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:12.358: INFO: Number of nodes with available pods: 0
Feb 1 12:08:12.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:13.282: INFO: Number of nodes with available pods: 0
Feb 1 12:08:13.282: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:14.306: INFO: Number of nodes with available pods: 0
Feb 1 12:08:14.306: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:15.267: INFO: Number of nodes with available pods: 0
Feb 1 12:08:15.267: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:16.295: INFO: Number of nodes with available pods: 0
Feb 1 12:08:16.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 1 12:08:17.276: INFO: Number of nodes with available pods: 1
Feb 1 12:08:17.276: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zrbfd, will wait for the garbage collector to delete the pods
Feb 1 12:08:17.397: INFO: Deleting DaemonSet.extensions daemon-set took: 55.77677ms
Feb 1 12:08:17.497: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.333233ms
Feb 1 12:08:32.819: INFO: Number of nodes with available pods: 0
Feb 1 12:08:32.819: INFO: Number of running nodes: 0, number of available pods: 0
Feb 1 12:08:32.829: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zrbfd/daemonsets","resourceVersion":"20191385"},"items":null}
Feb 1 12:08:32.834: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zrbfd/pods","resourceVersion":"20191385"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 12:08:32.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zrbfd" for this suite.
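
Namespace teardown is asynchronous: the deletion request returns immediately, and the entries that follow show the suite waiting until the namespace object is actually gone (about 6.4 seconds here). A minimal sketch of waiting for the same condition by hand, assuming a kubectl new enough to support --for=delete; the timeout value is arbitrary:

    # Block until the test namespace (and everything in it) has been fully removed.
    kubectl wait --for=delete namespace/e2e-tests-daemonsets-zrbfd --timeout=2m
    # Older clients can simply poll for the NotFound error instead:
    # while kubectl get namespace e2e-tests-daemonsets-zrbfd >/dev/null 2>&1; do sleep 2; done
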
Feb 1 12:08:39.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:08:39.264: INFO: namespace: e2e-tests-daemonsets-zrbfd, resource: bindings, ignored listing per whitelist Feb 1 12:08:39.286: INFO: namespace e2e-tests-daemonsets-zrbfd deletion completed in 6.420054208s • [SLOW TEST:48.609 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:08:39.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-t9nw STEP: Creating a pod to test atomic-volume-subpath Feb 1 12:08:39.544: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t9nw" in namespace "e2e-tests-subpath-xtbwj" to be "success or failure" Feb 1 12:08:39.567: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.172964ms Feb 1 12:08:41.645: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101171694s Feb 1 12:08:43.783: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239663544s Feb 1 12:08:45.795: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251056343s Feb 1 12:08:47.844: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300629132s Feb 1 12:08:49.885: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.341096161s Feb 1 12:08:51.912: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.367853134s Feb 1 12:08:53.939: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.395028473s Feb 1 12:08:55.956: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 16.412269345s Feb 1 12:08:57.977: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 18.432858727s Feb 1 12:08:59.997: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 20.453714864s Feb 1 12:09:02.028: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 22.484750036s Feb 1 12:09:04.049: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.504865684s Feb 1 12:09:06.067: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 26.522834122s Feb 1 12:09:08.088: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 28.544609806s Feb 1 12:09:10.120: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 30.575862143s Feb 1 12:09:12.149: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Running", Reason="", readiness=false. Elapsed: 32.604895061s Feb 1 12:09:14.178: INFO: Pod "pod-subpath-test-configmap-t9nw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.634011016s STEP: Saw pod success Feb 1 12:09:14.178: INFO: Pod "pod-subpath-test-configmap-t9nw" satisfied condition "success or failure" Feb 1 12:09:14.189: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-t9nw container test-container-subpath-configmap-t9nw: STEP: delete the pod Feb 1 12:09:14.325: INFO: Waiting for pod pod-subpath-test-configmap-t9nw to disappear Feb 1 12:09:14.524: INFO: Pod pod-subpath-test-configmap-t9nw no longer exists STEP: Deleting pod pod-subpath-test-configmap-t9nw Feb 1 12:09:14.525: INFO: Deleting pod "pod-subpath-test-configmap-t9nw" in namespace "e2e-tests-subpath-xtbwj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:09:14.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xtbwj" for this suite. Feb 1 12:09:20.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:09:20.893: INFO: namespace: e2e-tests-subpath-xtbwj, resource: bindings, ignored listing per whitelist Feb 1 12:09:20.957: INFO: namespace e2e-tests-subpath-xtbwj deletion completed in 6.387750419s • [SLOW TEST:41.671 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:09:20.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 1 12:09:21.143: INFO: Waiting up to 5m0s for pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-sdvf7" to be "success or failure" Feb 1 12:09:21.151: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.497248ms Feb 1 12:09:23.170: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027529788s Feb 1 12:09:25.193: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050212913s Feb 1 12:09:27.459: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315900473s Feb 1 12:09:29.469: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326596698s Feb 1 12:09:31.481: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.338615482s STEP: Saw pod success Feb 1 12:09:31.481: INFO: Pod "pod-adbbab09-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:09:31.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-adbbab09-44eb-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 12:09:31.919: INFO: Waiting for pod pod-adbbab09-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:09:31.927: INFO: Pod pod-adbbab09-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:09:31.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sdvf7" for this suite. Feb 1 12:09:38.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:09:38.201: INFO: namespace: e2e-tests-emptydir-sdvf7, resource: bindings, ignored listing per whitelist Feb 1 12:09:38.257: INFO: namespace e2e-tests-emptydir-sdvf7 deletion completed in 6.309773561s • [SLOW TEST:17.300 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:09:38.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 1 12:09:38.510: INFO: Waiting up to 5m0s for pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-9wljc" to be "success or failure" Feb 1 12:09:38.712: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 201.529126ms Feb 1 12:09:40.731: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.220995993s Feb 1 12:09:42.756: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245701536s Feb 1 12:09:44.767: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257316943s Feb 1 12:09:46.813: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.302515676s STEP: Saw pod success Feb 1 12:09:46.813: INFO: Pod "pod-b8157b35-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:09:46.817: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b8157b35-44eb-11ea-a88d-0242ac110005 container test-container: STEP: delete the pod Feb 1 12:09:46.924: INFO: Waiting for pod pod-b8157b35-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:09:46.935: INFO: Pod pod-b8157b35-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:09:46.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9wljc" for this suite. Feb 1 12:09:53.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:09:53.105: INFO: namespace: e2e-tests-emptydir-9wljc, resource: bindings, ignored listing per whitelist Feb 1 12:09:53.200: INFO: namespace e2e-tests-emptydir-9wljc deletion completed in 6.252664023s • [SLOW TEST:14.943 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:09:53.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 12:09:53.508: INFO: Creating ReplicaSet my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005 Feb 1 12:09:53.658: INFO: Pod name my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005: Found 0 pods out of 1 Feb 1 12:09:58.691: INFO: Pod name my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005: Found 1 pods out of 1 Feb 1 12:09:58.691: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005" is running Feb 1 12:10:02.729: INFO: Pod "my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005-t9vmx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:09:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:09:53 +0000 UTC Reason:ContainersNotReady Message:containers 
with unready status: [my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:09:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-01 12:09:53 +0000 UTC Reason: Message:}]) Feb 1 12:10:02.729: INFO: Trying to dial the pod Feb 1 12:10:07.796: INFO: Controller my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005: Got expected result from replica 1 [my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005-t9vmx]: "my-hostname-basic-c10d25be-44eb-11ea-a88d-0242ac110005-t9vmx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:10:07.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-dtktp" for this suite. Feb 1 12:10:13.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:10:14.088: INFO: namespace: e2e-tests-replicaset-dtktp, resource: bindings, ignored listing per whitelist Feb 1 12:10:14.404: INFO: namespace e2e-tests-replicaset-dtktp deletion completed in 6.594573903s • [SLOW TEST:21.203 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:10:14.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-78kt8/configmap-test-cdaceb24-44eb-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 12:10:14.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-78kt8" to be "success or failure" Feb 1 12:10:14.844: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 139.122785ms Feb 1 12:10:16.870: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165334606s Feb 1 12:10:19.547: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.841905459s Feb 1 12:10:21.572: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.866522866s Feb 1 12:10:23.591: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.886072617s Feb 1 12:10:25.608: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.903032099s Feb 1 12:10:27.623: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.918105419s STEP: Saw pod success Feb 1 12:10:27.623: INFO: Pod "pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:10:27.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005 container env-test: STEP: delete the pod Feb 1 12:10:28.382: INFO: Waiting for pod pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:10:28.404: INFO: Pod pod-configmaps-cdae2dc2-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:10:28.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-78kt8" for this suite. Feb 1 12:10:34.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:10:34.625: INFO: namespace: e2e-tests-configmap-78kt8, resource: bindings, ignored listing per whitelist Feb 1 12:10:34.709: INFO: namespace e2e-tests-configmap-78kt8 deletion completed in 6.283412502s • [SLOW TEST:20.305 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:10:34.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 12:10:34.908: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-2zbtq" to be "success or failure" Feb 1 12:10:34.919: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.922704ms Feb 1 12:10:37.176: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.267993318s Feb 1 12:10:39.204: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295851108s Feb 1 12:10:41.562: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654207464s Feb 1 12:10:43.579: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.671562343s Feb 1 12:10:45.596: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.688583208s STEP: Saw pod success Feb 1 12:10:45.596: INFO: Pod "downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:10:45.601: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 12:10:46.154: INFO: Waiting for pod downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:10:46.436: INFO: Pod downwardapi-volume-d9b4b94d-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:10:46.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2zbtq" for this suite. Feb 1 12:10:52.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:10:52.652: INFO: namespace: e2e-tests-downward-api-2zbtq, resource: bindings, ignored listing per whitelist Feb 1 12:10:53.004: INFO: namespace e2e-tests-downward-api-2zbtq deletion completed in 6.549964321s • [SLOW TEST:18.295 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:10:53.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 12:10:53.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-vhdbs" to be "success or failure" Feb 1 12:10:53.243: INFO: Pod 
"downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.339392ms Feb 1 12:10:55.255: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033680802s Feb 1 12:10:57.270: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049096045s Feb 1 12:10:59.346: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124868521s Feb 1 12:11:01.367: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146294338s Feb 1 12:11:03.419: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198006996s STEP: Saw pod success Feb 1 12:11:03.419: INFO: Pod "downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:11:03.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 12:11:03.585: INFO: Waiting for pod downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:11:03.604: INFO: Pod downwardapi-volume-e4a1ac03-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:11:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vhdbs" for this suite. Feb 1 12:11:09.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:11:09.925: INFO: namespace: e2e-tests-downward-api-vhdbs, resource: bindings, ignored listing per whitelist Feb 1 12:11:10.003: INFO: namespace e2e-tests-downward-api-vhdbs deletion completed in 6.327496372s • [SLOW TEST:16.998 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:11:10.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 1 12:11:10.249: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:11:11.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-xw9qs" for this suite. Feb 1 12:11:17.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:11:17.652: INFO: namespace: e2e-tests-custom-resource-definition-xw9qs, resource: bindings, ignored listing per whitelist Feb 1 12:11:17.689: INFO: namespace e2e-tests-custom-resource-definition-xw9qs deletion completed in 6.206049768s • [SLOW TEST:7.686 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:11:17.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 1 12:11:18.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-gtmhd" to be "success or failure" Feb 1 12:11:18.066: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.022411ms Feb 1 12:11:20.086: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059834507s Feb 1 12:11:22.092: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065290886s Feb 1 12:11:24.156: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129542969s Feb 1 12:11:26.170: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143481996s Feb 1 12:11:28.185: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.15883882s STEP: Saw pod success Feb 1 12:11:28.185: INFO: Pod "downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:11:28.197: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005 container client-container: STEP: delete the pod Feb 1 12:11:28.286: INFO: Waiting for pod downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005 to disappear Feb 1 12:11:28.366: INFO: Pod downwardapi-volume-f369812d-44eb-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:11:28.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gtmhd" for this suite. Feb 1 12:11:36.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:11:36.702: INFO: namespace: e2e-tests-projected-gtmhd, resource: bindings, ignored listing per whitelist Feb 1 12:11:36.720: INFO: namespace e2e-tests-projected-gtmhd deletion completed in 8.344129826s • [SLOW TEST:19.030 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:11:36.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-26b6c [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 1 12:11:36.963: INFO: Found 0 stateful pods, waiting for 3 Feb 1 12:11:46.980: INFO: Found 2 stateful pods, waiting for 3 Feb 1 12:11:56.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:11:56.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:11:56.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 1 12:12:07.012: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:12:07.012: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 1 12:12:07.012: INFO: Waiting for pod ss2-2 to 
enter Running - Ready=true, currently Running - Ready=true Feb 1 12:12:07.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26b6c ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 12:12:07.768: INFO: stderr: "I0201 12:12:07.261957 2022 log.go:172] (0xc00072e370) (0xc000752640) Create stream\nI0201 12:12:07.262158 2022 log.go:172] (0xc00072e370) (0xc000752640) Stream added, broadcasting: 1\nI0201 12:12:07.269913 2022 log.go:172] (0xc00072e370) Reply frame received for 1\nI0201 12:12:07.269965 2022 log.go:172] (0xc00072e370) (0xc0005e2c80) Create stream\nI0201 12:12:07.269982 2022 log.go:172] (0xc00072e370) (0xc0005e2c80) Stream added, broadcasting: 3\nI0201 12:12:07.272960 2022 log.go:172] (0xc00072e370) Reply frame received for 3\nI0201 12:12:07.273217 2022 log.go:172] (0xc00072e370) (0xc0003c2000) Create stream\nI0201 12:12:07.273246 2022 log.go:172] (0xc00072e370) (0xc0003c2000) Stream added, broadcasting: 5\nI0201 12:12:07.275279 2022 log.go:172] (0xc00072e370) Reply frame received for 5\nI0201 12:12:07.620026 2022 log.go:172] (0xc00072e370) Data frame received for 3\nI0201 12:12:07.620118 2022 log.go:172] (0xc0005e2c80) (3) Data frame handling\nI0201 12:12:07.620140 2022 log.go:172] (0xc0005e2c80) (3) Data frame sent\nI0201 12:12:07.748196 2022 log.go:172] (0xc00072e370) Data frame received for 1\nI0201 12:12:07.748387 2022 log.go:172] (0xc00072e370) (0xc0005e2c80) Stream removed, broadcasting: 3\nI0201 12:12:07.748477 2022 log.go:172] (0xc000752640) (1) Data frame handling\nI0201 12:12:07.748521 2022 log.go:172] (0xc000752640) (1) Data frame sent\nI0201 12:12:07.748542 2022 log.go:172] (0xc00072e370) (0xc0003c2000) Stream removed, broadcasting: 5\nI0201 12:12:07.748569 2022 log.go:172] (0xc00072e370) (0xc000752640) Stream removed, broadcasting: 1\nI0201 12:12:07.748587 2022 log.go:172] (0xc00072e370) Go away received\nI0201 12:12:07.749694 2022 log.go:172] (0xc00072e370) (0xc000752640) Stream removed, broadcasting: 1\nI0201 12:12:07.749764 2022 log.go:172] (0xc00072e370) (0xc0005e2c80) Stream removed, broadcasting: 3\nI0201 12:12:07.749787 2022 log.go:172] (0xc00072e370) (0xc0003c2000) Stream removed, broadcasting: 5\n" Feb 1 12:12:07.768: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 12:12:07.768: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 1 12:12:08.087: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 1 12:12:18.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26b6c ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:12:18.959: INFO: stderr: "I0201 12:12:18.400717 2045 log.go:172] (0xc0006c42c0) (0xc0003512c0) Create stream\nI0201 12:12:18.401000 2045 log.go:172] (0xc0006c42c0) (0xc0003512c0) Stream added, broadcasting: 1\nI0201 12:12:18.407835 2045 log.go:172] (0xc0006c42c0) Reply frame received for 1\nI0201 12:12:18.407973 2045 log.go:172] (0xc0006c42c0) (0xc000351360) Create stream\nI0201 12:12:18.407992 2045 log.go:172] (0xc0006c42c0) (0xc000351360) Stream added, broadcasting: 3\nI0201 12:12:18.410208 2045 log.go:172] (0xc0006c42c0) Reply frame received for 3\nI0201 
12:12:18.410239 2045 log.go:172] (0xc0006c42c0) (0xc00070c000) Create stream\nI0201 12:12:18.410245 2045 log.go:172] (0xc0006c42c0) (0xc00070c000) Stream added, broadcasting: 5\nI0201 12:12:18.411761 2045 log.go:172] (0xc0006c42c0) Reply frame received for 5\nI0201 12:12:18.661545 2045 log.go:172] (0xc0006c42c0) Data frame received for 3\nI0201 12:12:18.661672 2045 log.go:172] (0xc000351360) (3) Data frame handling\nI0201 12:12:18.661701 2045 log.go:172] (0xc000351360) (3) Data frame sent\nI0201 12:12:18.946707 2045 log.go:172] (0xc0006c42c0) Data frame received for 1\nI0201 12:12:18.946974 2045 log.go:172] (0xc0006c42c0) (0xc000351360) Stream removed, broadcasting: 3\nI0201 12:12:18.947026 2045 log.go:172] (0xc0003512c0) (1) Data frame handling\nI0201 12:12:18.947093 2045 log.go:172] (0xc0003512c0) (1) Data frame sent\nI0201 12:12:18.947102 2045 log.go:172] (0xc0006c42c0) (0xc0003512c0) Stream removed, broadcasting: 1\nI0201 12:12:18.947201 2045 log.go:172] (0xc0006c42c0) (0xc00070c000) Stream removed, broadcasting: 5\nI0201 12:12:18.947333 2045 log.go:172] (0xc0006c42c0) Go away received\nI0201 12:12:18.947886 2045 log.go:172] (0xc0006c42c0) (0xc0003512c0) Stream removed, broadcasting: 1\nI0201 12:12:18.947938 2045 log.go:172] (0xc0006c42c0) (0xc000351360) Stream removed, broadcasting: 3\nI0201 12:12:18.947963 2045 log.go:172] (0xc0006c42c0) (0xc00070c000) Stream removed, broadcasting: 5\n" Feb 1 12:12:18.960: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 12:12:18.960: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 12:12:19.488: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:12:19.488: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:19.488: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:19.488: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:29.528: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:12:29.529: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:29.529: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:39.509: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:12:39.509: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:39.509: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:49.506: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:12:49.506: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 1 12:12:59.513: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update STEP: Rolling back to a previous revision Feb 1 12:13:09.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26b6c ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 1 12:13:10.186: 
INFO: stderr: "I0201 12:13:09.766242 2067 log.go:172] (0xc00074a370) (0xc0007e8640) Create stream\nI0201 12:13:09.766671 2067 log.go:172] (0xc00074a370) (0xc0007e8640) Stream added, broadcasting: 1\nI0201 12:13:09.773868 2067 log.go:172] (0xc00074a370) Reply frame received for 1\nI0201 12:13:09.773924 2067 log.go:172] (0xc00074a370) (0xc00067ad20) Create stream\nI0201 12:13:09.773940 2067 log.go:172] (0xc00074a370) (0xc00067ad20) Stream added, broadcasting: 3\nI0201 12:13:09.775117 2067 log.go:172] (0xc00074a370) Reply frame received for 3\nI0201 12:13:09.775137 2067 log.go:172] (0xc00074a370) (0xc00067ae60) Create stream\nI0201 12:13:09.775144 2067 log.go:172] (0xc00074a370) (0xc00067ae60) Stream added, broadcasting: 5\nI0201 12:13:09.776574 2067 log.go:172] (0xc00074a370) Reply frame received for 5\nI0201 12:13:10.030780 2067 log.go:172] (0xc00074a370) Data frame received for 3\nI0201 12:13:10.030872 2067 log.go:172] (0xc00067ad20) (3) Data frame handling\nI0201 12:13:10.030894 2067 log.go:172] (0xc00067ad20) (3) Data frame sent\nI0201 12:13:10.170541 2067 log.go:172] (0xc00074a370) Data frame received for 1\nI0201 12:13:10.170774 2067 log.go:172] (0xc00074a370) (0xc00067ad20) Stream removed, broadcasting: 3\nI0201 12:13:10.170948 2067 log.go:172] (0xc0007e8640) (1) Data frame handling\nI0201 12:13:10.170990 2067 log.go:172] (0xc0007e8640) (1) Data frame sent\nI0201 12:13:10.171130 2067 log.go:172] (0xc00074a370) (0xc00067ae60) Stream removed, broadcasting: 5\nI0201 12:13:10.171203 2067 log.go:172] (0xc00074a370) (0xc0007e8640) Stream removed, broadcasting: 1\nI0201 12:13:10.171240 2067 log.go:172] (0xc00074a370) Go away received\nI0201 12:13:10.172099 2067 log.go:172] (0xc00074a370) (0xc0007e8640) Stream removed, broadcasting: 1\nI0201 12:13:10.172121 2067 log.go:172] (0xc00074a370) (0xc00067ad20) Stream removed, broadcasting: 3\nI0201 12:13:10.172150 2067 log.go:172] (0xc00074a370) (0xc00067ae60) Stream removed, broadcasting: 5\n" Feb 1 12:13:10.186: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 1 12:13:10.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 1 12:13:20.315: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 1 12:13:30.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-26b6c ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 1 12:13:31.019: INFO: stderr: "I0201 12:13:30.630030 2090 log.go:172] (0xc000712370) (0xc00072e640) Create stream\nI0201 12:13:30.630280 2090 log.go:172] (0xc000712370) (0xc00072e640) Stream added, broadcasting: 1\nI0201 12:13:30.635851 2090 log.go:172] (0xc000712370) Reply frame received for 1\nI0201 12:13:30.635875 2090 log.go:172] (0xc000712370) (0xc00066edc0) Create stream\nI0201 12:13:30.635885 2090 log.go:172] (0xc000712370) (0xc00066edc0) Stream added, broadcasting: 3\nI0201 12:13:30.636638 2090 log.go:172] (0xc000712370) Reply frame received for 3\nI0201 12:13:30.636679 2090 log.go:172] (0xc000712370) (0xc0001b8000) Create stream\nI0201 12:13:30.636694 2090 log.go:172] (0xc000712370) (0xc0001b8000) Stream added, broadcasting: 5\nI0201 12:13:30.637769 2090 log.go:172] (0xc000712370) Reply frame received for 5\nI0201 12:13:30.873898 2090 log.go:172] (0xc000712370) Data frame received for 3\nI0201 12:13:30.874077 2090 log.go:172] (0xc00066edc0) (3) Data frame handling\nI0201 12:13:30.874125 2090 
log.go:172] (0xc00066edc0) (3) Data frame sent\nI0201 12:13:31.010601 2090 log.go:172] (0xc000712370) Data frame received for 1\nI0201 12:13:31.010753 2090 log.go:172] (0xc000712370) (0xc0001b8000) Stream removed, broadcasting: 5\nI0201 12:13:31.010814 2090 log.go:172] (0xc00072e640) (1) Data frame handling\nI0201 12:13:31.010843 2090 log.go:172] (0xc00072e640) (1) Data frame sent\nI0201 12:13:31.010923 2090 log.go:172] (0xc000712370) (0xc00072e640) Stream removed, broadcasting: 1\nI0201 12:13:31.010956 2090 log.go:172] (0xc000712370) (0xc00066edc0) Stream removed, broadcasting: 3\nI0201 12:13:31.010974 2090 log.go:172] (0xc000712370) Go away received\nI0201 12:13:31.011546 2090 log.go:172] (0xc000712370) (0xc00072e640) Stream removed, broadcasting: 1\nI0201 12:13:31.011567 2090 log.go:172] (0xc000712370) (0xc00066edc0) Stream removed, broadcasting: 3\nI0201 12:13:31.011582 2090 log.go:172] (0xc000712370) (0xc0001b8000) Stream removed, broadcasting: 5\n" Feb 1 12:13:31.019: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 1 12:13:31.019: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 1 12:13:41.073: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:13:41.073: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:13:41.073: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:13:51.097: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:13:51.097: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:13:51.097: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:14:01.101: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:14:01.101: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:14:01.101: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:14:11.090: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update Feb 1 12:14:11.090: INFO: Waiting for Pod e2e-tests-statefulset-26b6c/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 1 12:14:21.107: INFO: Waiting for StatefulSet e2e-tests-statefulset-26b6c/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 1 12:14:31.139: INFO: Deleting all statefulset in ns e2e-tests-statefulset-26b6c Feb 1 12:14:31.148: INFO: Scaling statefulset ss2 to 0 Feb 1 12:14:51.201: INFO: Waiting for statefulset status.replicas updated to 0 Feb 1 12:14:51.210: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:14:51.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-26b6c" for this suite. 
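The rolling-update portion of the StatefulSet log above is easy to lose amid the exec stream traces, so here is a minimal shell sketch of the same update-then-rollback cycle driven by kubectl rather than the suite's Go framework. The StatefulSet name ss2, the namespace, and the two nginx image tags come from the log; the container name nginx and the use of kubectl set image / rollout are assumptions about how one would replay it by hand, not the test's own mechanism (the suite patches the object directly).

# Apply the same image change the test makes to the ss2 StatefulSet
kubectl -n e2e-tests-statefulset-26b6c set image statefulset/ss2 \
    nginx=docker.io/library/nginx:1.15-alpine    # container name "nginx" is assumed

# Watch the controller replace pods in reverse ordinal order
kubectl -n e2e-tests-statefulset-26b6c rollout status statefulset/ss2

# Roll back to the previous revision, mirroring the test's final phase
kubectl -n e2e-tests-statefulset-26b6c rollout undo statefulset/ss2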
Feb 1 12:14:59.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:14:59.486: INFO: namespace: e2e-tests-statefulset-26b6c, resource: bindings, ignored listing per whitelist Feb 1 12:14:59.555: INFO: namespace e2e-tests-statefulset-26b6c deletion completed in 8.249589865s • [SLOW TEST:202.835 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:14:59.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-77ac9af2-44ec-11ea-a88d-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 1 12:14:59.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-d9f9r" to be "success or failure" Feb 1 12:14:59.944: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.039128ms Feb 1 12:15:02.378: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449455746s Feb 1 12:15:04.403: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474311274s Feb 1 12:15:06.656: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727374514s Feb 1 12:15:08.726: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797021887s Feb 1 12:15:10.972: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.043404411s STEP: Saw pod success Feb 1 12:15:10.972: INFO: Pod "pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005" satisfied condition "success or failure" Feb 1 12:15:10.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 1 12:15:11.246: INFO: Waiting for pod pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005 to disappear Feb 1 12:15:11.253: INFO: Pod pod-projected-configmaps-77ae10ca-44ec-11ea-a88d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:15:11.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d9f9r" for this suite. Feb 1 12:15:17.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:15:17.648: INFO: namespace: e2e-tests-projected-d9f9r, resource: bindings, ignored listing per whitelist Feb 1 12:15:17.654: INFO: namespace e2e-tests-projected-d9f9r deletion completed in 6.39213715s • [SLOW TEST:18.099 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:15:17.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-g9qg STEP: Creating a pod to test atomic-volume-subpath Feb 1 12:15:17.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g9qg" in namespace "e2e-tests-subpath-tpvwc" to be "success or failure" Feb 1 12:15:17.915: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204829ms Feb 1 12:15:19.932: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025077362s Feb 1 12:15:21.948: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040869661s Feb 1 12:15:23.962: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054750127s Feb 1 12:15:25.982: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.07524259s Feb 1 12:15:28.143: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.235364013s Feb 1 12:15:30.358: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.450385437s Feb 1 12:15:32.372: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464703669s Feb 1 12:15:34.441: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 16.534280823s Feb 1 12:15:36.511: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 18.603854139s Feb 1 12:15:38.553: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 20.64629661s Feb 1 12:15:40.611: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 22.703546508s Feb 1 12:15:42.642: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 24.735213426s Feb 1 12:15:44.660: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 26.752793136s Feb 1 12:15:46.677: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 28.77020479s Feb 1 12:15:48.735: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 30.828178676s Feb 1 12:15:50.746: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Running", Reason="", readiness=false. Elapsed: 32.838648296s Feb 1 12:15:52.757: INFO: Pod "pod-subpath-test-configmap-g9qg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.849753536s STEP: Saw pod success Feb 1 12:15:52.757: INFO: Pod "pod-subpath-test-configmap-g9qg" satisfied condition "success or failure" Feb 1 12:15:52.762: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-g9qg container test-container-subpath-configmap-g9qg: STEP: delete the pod Feb 1 12:15:53.452: INFO: Waiting for pod pod-subpath-test-configmap-g9qg to disappear Feb 1 12:15:53.754: INFO: Pod pod-subpath-test-configmap-g9qg no longer exists STEP: Deleting pod pod-subpath-test-configmap-g9qg Feb 1 12:15:53.754: INFO: Deleting pod "pod-subpath-test-configmap-g9qg" in namespace "e2e-tests-subpath-tpvwc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:15:53.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-tpvwc" for this suite. 
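As a rough illustration of what the Atomic writer subpath specs above exercise, the sketch below mounts a single ConfigMap key into a container through a subPath volume mount. It is not the manifest the suite generates: the names subpath-demo and demo-config, the busybox image, and the file contents are stand-ins invented for this example.

kubectl create configmap demo-config --from-literal=index.html='hello from a configmap'

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mnt/index.html"]
    volumeMounts:
    - name: config
      mountPath: /mnt/index.html
      subPath: index.html        # mount one key as a single file, the pattern these tests cover
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF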
Feb 1 12:15:59.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 1 12:16:00.041: INFO: namespace: e2e-tests-subpath-tpvwc, resource: bindings, ignored listing per whitelist Feb 1 12:16:00.288: INFO: namespace e2e-tests-subpath-tpvwc deletion completed in 6.483916678s • [SLOW TEST:42.633 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 1 12:16:00.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 1 12:16:00.782: INFO: Number of nodes with available pods: 0 Feb 1 12:16:00.782: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:01.812: INFO: Number of nodes with available pods: 0 Feb 1 12:16:01.812: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:02.802: INFO: Number of nodes with available pods: 0 Feb 1 12:16:02.802: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:03.814: INFO: Number of nodes with available pods: 0 Feb 1 12:16:03.814: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:04.810: INFO: Number of nodes with available pods: 0 Feb 1 12:16:04.810: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:06.158: INFO: Number of nodes with available pods: 0 Feb 1 12:16:06.158: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:06.953: INFO: Number of nodes with available pods: 0 Feb 1 12:16:06.953: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:07.802: INFO: Number of nodes with available pods: 0 Feb 1 12:16:07.802: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:08.806: INFO: Number of nodes with available pods: 1 Feb 1 12:16:08.806: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 1 12:16:09.020: INFO: Number of nodes with available pods: 0 Feb 1 12:16:09.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:10.383: INFO: Number of nodes with available pods: 0 Feb 1 12:16:10.383: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:11.418: INFO: Number of nodes with available pods: 0 Feb 1 12:16:11.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:12.048: INFO: Number of nodes with available pods: 0 Feb 1 12:16:12.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:13.040: INFO: Number of nodes with available pods: 0 Feb 1 12:16:13.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:14.041: INFO: Number of nodes with available pods: 0 Feb 1 12:16:14.041: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:15.268: INFO: Number of nodes with available pods: 0 Feb 1 12:16:15.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:16.043: INFO: Number of nodes with available pods: 0 Feb 1 12:16:16.043: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:17.067: INFO: Number of nodes with available pods: 0 Feb 1 12:16:17.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:18.043: INFO: Number of nodes with available pods: 0 Feb 1 12:16:18.043: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 1 12:16:19.032: INFO: Number of nodes with available pods: 1 Feb 1 12:16:19.032: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-hrslv, will wait for the garbage collector to delete the pods Feb 1 12:16:19.129: INFO: Deleting DaemonSet.extensions daemon-set took: 30.321647ms Feb 1 12:16:19.430: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.882618ms Feb 1 12:16:32.853: INFO: Number of nodes with available pods: 0 Feb 1 12:16:32.853: INFO: Number of running nodes: 0, number of available pods: 0 Feb 1 12:16:32.869: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hrslv/daemonsets","resourceVersion":"20192654"},"items":null} Feb 1 12:16:32.875: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hrslv/pods","resourceVersion":"20192654"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 1 12:16:32.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-hrslv" for this suite. 
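For context on the DaemonSet spec above: the test creates a trivial DaemonSet named daemon-set, forces one of its pods into a Failed phase, and asserts that the controller revives it. The sketch below creates a DaemonSet of the same shape by hand; only the name daemon-set comes from the log, while the label, container name, and image are assumed, and deleting a pod is used here as a rough stand-in for the test's direct status patch.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine    # assumed; any image cached on the node works
EOF

# Killing a daemon pod approximates the failure; the controller recreates it on the same node
kubectl delete pod -l app=daemon-set --wait=false
kubectl get pods -l app=daemon-set -w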
Feb 1 12:16:39.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:16:39.092: INFO: namespace: e2e-tests-daemonsets-hrslv, resource: bindings, ignored listing per whitelist
Feb 1 12:16:39.302: INFO: namespace e2e-tests-daemonsets-hrslv deletion completed in 6.400659206s

• [SLOW TEST:39.013 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 1 12:16:39.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 1 12:16:39.541: INFO: namespace e2e-tests-kubectl-k8rqg
Feb 1 12:16:39.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k8rqg'
Feb 1 12:16:41.462: INFO: stderr: ""
Feb 1 12:16:41.463: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 1 12:16:43.040: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:43.040: INFO: Found 0 / 1
Feb 1 12:16:43.540: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:43.540: INFO: Found 0 / 1
Feb 1 12:16:44.503: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:44.503: INFO: Found 0 / 1
Feb 1 12:16:45.478: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:45.478: INFO: Found 0 / 1
Feb 1 12:16:47.292: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:47.292: INFO: Found 0 / 1
Feb 1 12:16:47.729: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:47.729: INFO: Found 0 / 1
Feb 1 12:16:48.861: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:48.861: INFO: Found 0 / 1
Feb 1 12:16:49.469: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:49.469: INFO: Found 0 / 1
Feb 1 12:16:50.487: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:50.488: INFO: Found 1 / 1
Feb 1 12:16:50.488: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 1 12:16:50.499: INFO: Selector matched 1 pods for map[app:redis]
Feb 1 12:16:50.499: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 1 12:16:50.499: INFO: wait on redis-master startup in e2e-tests-kubectl-k8rqg
Feb 1 12:16:50.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gpsd9 redis-master --namespace=e2e-tests-kubectl-k8rqg'
Feb 1 12:16:50.733: INFO: stderr: ""
Feb 1 12:16:50.733: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Feb 12:16:49.629 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Feb 12:16:49.629 # Server started, Redis version 3.2.12\n1:M 01 Feb 12:16:49.629 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Feb 12:16:49.629 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 1 12:16:50.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-k8rqg'
Feb 1 12:16:50.919: INFO: stderr: ""
Feb 1 12:16:50.919: INFO: stdout: "service/rm2 exposed\n"
Feb 1 12:16:50.929: INFO: Service rm2 in namespace e2e-tests-kubectl-k8rqg found.
STEP: exposing service
Feb 1 12:16:52.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-k8rqg'
Feb 1 12:16:53.203: INFO: stderr: ""
Feb 1 12:16:53.203: INFO: stdout: "service/rm3 exposed\n"
Feb 1 12:16:53.254: INFO: Service rm3 in namespace e2e-tests-kubectl-k8rqg found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 12:16:55.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k8rqg" for this suite.
Feb 1 12:17:21.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:17:21.433: INFO: namespace: e2e-tests-kubectl-k8rqg, resource: bindings, ignored listing per whitelist
Feb 1 12:17:21.508: INFO: namespace e2e-tests-kubectl-k8rqg deletion completed in 26.221439377s

• [SLOW TEST:42.206 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 1 12:17:21.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 1 12:17:21.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-jzl4p" to be "success or failure"
Feb 1 12:17:21.756: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.827452ms
Feb 1 12:17:23.768: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017999562s
Feb 1 12:17:25.783: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033568603s
Feb 1 12:17:27.938: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188301645s
Feb 1 12:17:30.174: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.423746976s
Feb 1 12:17:32.291: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.541601622s
STEP: Saw pod success
Feb 1 12:17:32.292: INFO: Pod "downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb 1 12:17:32.301: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005 container client-container:
STEP: delete the pod
Feb 1 12:17:32.450: INFO: Waiting for pod downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005 to disappear
Feb 1 12:17:32.484: INFO: Pod downwardapi-volume-cc37e718-44ec-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 12:17:32.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jzl4p" for this suite.
Feb 1 12:17:40.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:17:40.828: INFO: namespace: e2e-tests-projected-jzl4p, resource: bindings, ignored listing per whitelist
Feb 1 12:17:40.855: INFO: namespace e2e-tests-projected-jzl4p deletion completed in 8.348301599s

• [SLOW TEST:19.347 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 1 12:17:40.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d7b0b95a-44ec-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 1 12:17:41.067: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-7nghw" to be "success or failure"
Feb 1 12:17:41.119: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.05254ms
Feb 1 12:17:43.133: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066293767s
Feb 1 12:17:45.668: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601429175s
Feb 1 12:17:47.689: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621872144s
Feb 1 12:17:49.702: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.634711549s
STEP: Saw pod success
Feb 1 12:17:49.702: INFO: Pod "pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb 1 12:17:49.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Feb 1 12:17:49.932: INFO: Waiting for pod pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005 to disappear
Feb 1 12:17:50.106: INFO: Pod pod-projected-configmaps-d7b1896b-44ec-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 1 12:17:50.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7nghw" for this suite.
Feb 1 12:17:56.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 1 12:17:56.307: INFO: namespace: e2e-tests-projected-7nghw, resource: bindings, ignored listing per whitelist
Feb 1 12:17:56.374: INFO: namespace e2e-tests-projected-7nghw deletion completed in 6.245641251s

• [SLOW TEST:15.519 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 1 12:17:56.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 1 12:17:56.757: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 11.456824ms)
Feb  1 12:17:56.766: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.642077ms)
Feb  1 12:17:56.771: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.575419ms)
Feb  1 12:17:56.778: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.903069ms)
Feb  1 12:17:56.785: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.361068ms)
Feb  1 12:17:56.790: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.66062ms)
Feb  1 12:17:56.795: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.594609ms)
Feb  1 12:17:56.801: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.323502ms)
Feb  1 12:17:56.809: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.083855ms)
Feb  1 12:17:56.814: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.173938ms)
Feb  1 12:17:56.857: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 42.908284ms)
Feb  1 12:17:56.864: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.102809ms)
Feb  1 12:17:56.870: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.00994ms)
Feb  1 12:17:56.875: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.698914ms)
Feb  1 12:17:56.880: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.088882ms)
Feb  1 12:17:56.885: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.93452ms)
Feb  1 12:17:56.890: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.222856ms)
Feb  1 12:17:56.895: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.63159ms)
Feb  1 12:17:56.899: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.442616ms)
Feb  1 12:17:56.904: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.194011ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:17:56.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-2j5mx" for this suite.
Feb  1 12:18:02.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:18:03.044: INFO: namespace: e2e-tests-proxy-2j5mx, resource: bindings, ignored listing per whitelist
Feb  1 12:18:03.096: INFO: namespace e2e-tests-proxy-2j5mx deletion completed in 6.18663117s

• [SLOW TEST:6.721 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
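
The spec above drives twenty GET requests through the apiserver's node proxy subresource, which forwards each one to the kubelet's read-only log index on the explicitly named port 10250. A minimal sketch of issuing the same request by hand with kubectl, reusing only the node name and kubeconfig path that appear in the log (everything else about the cluster is assumed):

# Ask the apiserver to proxy a GET to the kubelet's /logs/ index on port 10250,
# the same path the test exercises.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"

The 200 responses with the truncated "alternatives.log" listing above are the body of that endpoint as returned through the proxy.
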
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:18:03.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e4f2aec9-44ec-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:18:03.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-k5b99" to be "success or failure"
Feb  1 12:18:03.414: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.412115ms
Feb  1 12:18:05.530: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279508398s
Feb  1 12:18:07.544: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293504679s
Feb  1 12:18:09.567: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316241214s
Feb  1 12:18:11.714: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463371486s
Feb  1 12:18:13.725: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.474329432s
STEP: Saw pod success
Feb  1 12:18:13.725: INFO: Pod "pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:18:13.735: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  1 12:18:14.521: INFO: Waiting for pod pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005 to disappear
Feb  1 12:18:14.531: INFO: Pod pod-projected-secrets-e4f40052-44ec-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:18:14.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k5b99" for this suite.
Feb  1 12:18:20.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:18:20.846: INFO: namespace: e2e-tests-projected-k5b99, resource: bindings, ignored listing per whitelist
Feb  1 12:18:20.948: INFO: namespace e2e-tests-projected-k5b99 deletion completed in 6.348082273s

• [SLOW TEST:17.852 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
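
The spec above creates a secret, projects it into a volume with an explicit defaultMode, and has the test container read the mounted file back before the pod is cleaned up. A rough hand-written equivalent is sketched below; the secret name, pod name, image, and the mode 0400 are illustrative placeholders, not the objects the framework generated:

# Hypothetical names throughout; only the projected-volume shape mirrors the spec.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: demo-secret
EOF

Inspecting the file mode in the reader container's output is the hand-run analogue of the check the framework performs on its generated pod.
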
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:18:20.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-efa715eb-44ec-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:18:21.265: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-zbpwf" to be "success or failure"
Feb  1 12:18:21.286: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.648479ms
Feb  1 12:18:23.395: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129577353s
Feb  1 12:18:25.405: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139533929s
Feb  1 12:18:27.768: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502984688s
Feb  1 12:18:30.433: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.167626858s
Feb  1 12:18:32.448: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.18319417s
STEP: Saw pod success
Feb  1 12:18:32.448: INFO: Pod "pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:18:32.454: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:18:33.034: INFO: Waiting for pod pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005 to disappear
Feb  1 12:18:33.059: INFO: Pod pod-projected-secrets-efa91b82-44ec-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:18:33.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zbpwf" for this suite.
Feb  1 12:18:39.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:18:39.511: INFO: namespace: e2e-tests-projected-zbpwf, resource: bindings, ignored listing per whitelist
Feb  1 12:18:39.771: INFO: namespace e2e-tests-projected-zbpwf deletion completed in 6.349322542s

• [SLOW TEST:18.822 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
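
This spec mounts one secret through two separate projected volumes in the same pod and reads it at both paths. A sketch of the same shape with invented names, assuming an existing secret such as the demo-secret from the previous sketch:

# Illustrative manifest; the framework's generated pod uses random names instead.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-two-mounts
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/vol-1/data-1 /etc/vol-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/vol-1
    - name: vol-2
      mountPath: /etc/vol-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: vol-2
    projected:
      sources:
      - secret:
          name: demo-secret
EOF

Because the container command exits after reading both files, the pod ends in Succeeded, matching the "success or failure" condition the framework polls for above.
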
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:18:39.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jqspl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  1 12:18:40.189: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  1 12:19:10.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-jqspl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  1 12:19:10.663: INFO: >>> kubeConfig: /root/.kube/config
I0201 12:19:10.722327       8 log.go:172] (0xc0029b82c0) (0xc0012f5c20) Create stream
I0201 12:19:10.722411       8 log.go:172] (0xc0029b82c0) (0xc0012f5c20) Stream added, broadcasting: 1
I0201 12:19:10.728720       8 log.go:172] (0xc0029b82c0) Reply frame received for 1
I0201 12:19:10.728758       8 log.go:172] (0xc0029b82c0) (0xc0012f5cc0) Create stream
I0201 12:19:10.728772       8 log.go:172] (0xc0029b82c0) (0xc0012f5cc0) Stream added, broadcasting: 3
I0201 12:19:10.729834       8 log.go:172] (0xc0029b82c0) Reply frame received for 3
I0201 12:19:10.729902       8 log.go:172] (0xc0029b82c0) (0xc0028a75e0) Create stream
I0201 12:19:10.729918       8 log.go:172] (0xc0029b82c0) (0xc0028a75e0) Stream added, broadcasting: 5
I0201 12:19:10.730771       8 log.go:172] (0xc0029b82c0) Reply frame received for 5
I0201 12:19:10.992103       8 log.go:172] (0xc0029b82c0) Data frame received for 3
I0201 12:19:10.992165       8 log.go:172] (0xc0012f5cc0) (3) Data frame handling
I0201 12:19:10.992199       8 log.go:172] (0xc0012f5cc0) (3) Data frame sent
I0201 12:19:11.158870       8 log.go:172] (0xc0029b82c0) (0xc0012f5cc0) Stream removed, broadcasting: 3
I0201 12:19:11.158981       8 log.go:172] (0xc0029b82c0) (0xc0028a75e0) Stream removed, broadcasting: 5
I0201 12:19:11.159059       8 log.go:172] (0xc0029b82c0) Data frame received for 1
I0201 12:19:11.159070       8 log.go:172] (0xc0012f5c20) (1) Data frame handling
I0201 12:19:11.159106       8 log.go:172] (0xc0012f5c20) (1) Data frame sent
I0201 12:19:11.159124       8 log.go:172] (0xc0029b82c0) (0xc0012f5c20) Stream removed, broadcasting: 1
I0201 12:19:11.159137       8 log.go:172] (0xc0029b82c0) Go away received
I0201 12:19:11.159737       8 log.go:172] (0xc0029b82c0) (0xc0012f5c20) Stream removed, broadcasting: 1
I0201 12:19:11.159763       8 log.go:172] (0xc0029b82c0) (0xc0012f5cc0) Stream removed, broadcasting: 3
I0201 12:19:11.159786       8 log.go:172] (0xc0029b82c0) (0xc0028a75e0) Stream removed, broadcasting: 5
Feb  1 12:19:11.159: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:19:11.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jqspl" for this suite.
Feb  1 12:19:37.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:19:37.370: INFO: namespace: e2e-tests-pod-network-test-jqspl, resource: bindings, ignored listing per whitelist
Feb  1 12:19:37.399: INFO: namespace e2e-tests-pod-network-test-jqspl deletion completed in 26.224766817s

• [SLOW TEST:57.628 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
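
The ExecWithOptions entry above is the actual connectivity probe: the framework execs into host-test-container-pod and curls the webserver test pod's /dial endpoint, which in turn dials the target pod over HTTP and reports which hostnames answered. The same probe could be re-issued by hand while the namespace still existed; the pod IPs 10.32.0.5 and 10.32.0.4 are the ephemeral addresses recorded in this run and would differ in any other run:

# Re-issue the probe the framework ran via ExecWithOptions.
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-pod-network-test-jqspl \
  exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"

The closing "Waiting for endpoints: map[]" line indicates there were no expected endpoints left to hear from, so the check passed.
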
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:19:37.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  1 12:19:37.574: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  1 12:19:37.583: INFO: Waiting for terminating namespaces to be deleted...
Feb  1 12:19:37.586: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  1 12:19:37.603: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:19:37.603: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:19:37.603: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  1 12:19:37.603: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 12:19:37.603: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:19:37.603: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  1 12:19:37.603: INFO: 	Container weave ready: true, restart count 0
Feb  1 12:19:37.603: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 12:19:37.603: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:19:37.603: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:19:37.603: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:19:37.603: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:19:37.603: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-233fc702-44ed-11ea-a88d-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-233fc702-44ed-11ea-a88d-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-233fc702-44ed-11ea-a88d-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:19:58.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tj2jj" for this suite.
Feb  1 12:20:14.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:20:14.339: INFO: namespace: e2e-tests-sched-pred-tj2jj, resource: bindings, ignored listing per whitelist
Feb  1 12:20:14.417: INFO: namespace e2e-tests-sched-pred-tj2jj deletion completed in 16.185579921s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.018 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
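
The scheduling spec above finds a schedulable node by launching an unlabeled placeholder pod, applies a random kubernetes.io/e2e-* label with value 42 to that node, relaunches the pod with a matching nodeSelector, and finally strips the label again. A hand-run sketch of the same flow, with a made-up label key and an illustrative pause image in place of the generated ones:

# Hypothetical label key; the test uses a random kubernetes.io/e2e-<uuid> key with value 42.
kubectl label node hunter-server-hu5at5svl7ps example.com/e2e-demo=42
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# Teardown mirrors the "removing the label ... off the node" step above.
kubectl label node hunter-server-hu5at5svl7ps example.com/e2e-demo-

The pod schedules only while the label is present, which is the property the spec asserts.
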
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:20:14.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-9hrj
STEP: Creating a pod to test atomic-volume-subpath
Feb  1 12:20:14.656: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9hrj" in namespace "e2e-tests-subpath-vljwx" to be "success or failure"
Feb  1 12:20:14.662: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374217ms
Feb  1 12:20:16.674: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018190665s
Feb  1 12:20:18.686: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030201019s
Feb  1 12:20:20.711: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055174646s
Feb  1 12:20:23.027: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371336596s
Feb  1 12:20:25.043: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.387652683s
Feb  1 12:20:27.058: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.401710971s
Feb  1 12:20:29.548: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.89189572s
Feb  1 12:20:31.580: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 16.924349744s
Feb  1 12:20:33.597: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 18.940780184s
Feb  1 12:20:35.615: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 20.959239235s
Feb  1 12:20:37.624: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 22.968007637s
Feb  1 12:20:39.641: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 24.985350866s
Feb  1 12:20:41.656: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 27.000405579s
Feb  1 12:20:43.673: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 29.017585535s
Feb  1 12:20:45.688: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 31.032091819s
Feb  1 12:20:47.700: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Running", Reason="", readiness=false. Elapsed: 33.044213602s
Feb  1 12:20:49.714: INFO: Pod "pod-subpath-test-downwardapi-9hrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.05773886s
STEP: Saw pod success
Feb  1 12:20:49.714: INFO: Pod "pod-subpath-test-downwardapi-9hrj" satisfied condition "success or failure"
Feb  1 12:20:49.743: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-9hrj container test-container-subpath-downwardapi-9hrj: 
STEP: delete the pod
Feb  1 12:20:50.006: INFO: Waiting for pod pod-subpath-test-downwardapi-9hrj to disappear
Feb  1 12:20:50.026: INFO: Pod pod-subpath-test-downwardapi-9hrj no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9hrj
Feb  1 12:20:50.026: INFO: Deleting pod "pod-subpath-test-downwardapi-9hrj" in namespace "e2e-tests-subpath-vljwx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:20:50.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vljwx" for this suite.
Feb  1 12:20:56.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:20:56.174: INFO: namespace: e2e-tests-subpath-vljwx, resource: bindings, ignored listing per whitelist
Feb  1 12:20:56.262: INFO: namespace e2e-tests-subpath-vljwx deletion completed in 6.21847729s

• [SLOW TEST:41.844 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
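
Here the atomic-writer spec mounts a single file from a downward API volume into the container through subPath, and the container consumes it while the pod runs to completion. A rough stand-alone sketch with invented names (the test's pod-subpath-test-downwardapi-9hrj and its container are generated by the framework):

# Illustrative manifest: a downward API volume exposing the pod's labels,
# mounted via subPath so only that one file appears at the mount point.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-subpath-demo
  labels:
    demo: "true"
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo-labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo-labels
      subPath: labels
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
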
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:20:56.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  1 12:20:56.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4cdnp'
Feb  1 12:20:56.708: INFO: stderr: ""
Feb  1 12:20:56.708: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb  1 12:20:56.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4cdnp'
Feb  1 12:21:02.706: INFO: stderr: ""
Feb  1 12:21:02.706: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:21:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4cdnp" for this suite.
Feb  1 12:21:08.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:21:08.915: INFO: namespace: e2e-tests-kubectl-4cdnp, resource: bindings, ignored listing per whitelist
Feb  1 12:21:09.049: INFO: namespace e2e-tests-kubectl-4cdnp deletion completed in 6.330679805s

• [SLOW TEST:12.788 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
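
The command and cleanup in this spec translate directly into standalone kubectl calls; they are reproduced below without the test's kubeconfig and generated namespace. The --generator flag matches the v1.13-era client used in this run and was deprecated and later removed in newer kubectl releases:

# Create a single pod (no controller) from the nginx image, then remove it.
kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod
kubectl delete pod e2e-test-nginx-pod

With --restart=Never the run command creates a bare Pod rather than a controller-managed workload, which is exactly what the "verifying the pod e2e-test-nginx-pod was created" step checks.
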
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:21:09.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:21:09.365: INFO: Creating deployment "nginx-deployment"
Feb  1 12:21:09.379: INFO: Waiting for observed generation 1
Feb  1 12:21:11.535: INFO: Waiting for all required pods to come up
Feb  1 12:21:11.603: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  1 12:21:49.463: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  1 12:21:49.552: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  1 12:21:49.566: INFO: Updating deployment nginx-deployment
Feb  1 12:21:49.566: INFO: Waiting for observed generation 2
Feb  1 12:21:52.833: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  1 12:21:52.854: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  1 12:21:52.861: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  1 12:21:54.819: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  1 12:21:54.819: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  1 12:21:54.871: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  1 12:21:55.061: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  1 12:21:55.061: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  1 12:21:56.291: INFO: Updating deployment nginx-deployment
Feb  1 12:21:56.291: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  1 12:21:57.216: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  1 12:22:01.944: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
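The replica counts being verified here follow from proportional scaling arithmetic. Before the scale-up the deployment sat at 10 replicas split 8 (old ReplicaSet) and 5 (new ReplicaSet with the unresolvable image), a total of 13, which is 10 plus the maxSurge of 3 recorded in the deployment dump below. Raising .spec.replicas from 10 to 30 lifts the allowed total to 33, and the extra 20 replicas are shared out roughly in proportion to the current sizes 8/13 and 5/13, giving 8 + 12 = 20 and 5 + 8 = 13. A hedged sketch of driving the same step with kubectl instead of the test's API client (the namespace is the test-generated one from this run and no longer exists):

kubectl -n e2e-tests-deployment-nc7qd scale deployment nginx-deployment --replicas=30
kubectl -n e2e-tests-deployment-nc7qd get rs   # expect desired replicas of 20 and 13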
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  1 12:22:03.199: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nc7qd/deployments/nginx-deployment,UID:53e57b8f-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193549,Generation:3,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-01 12:21:50 +0000 UTC 2020-02-01 12:21:09 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-01 12:21:56 +0000 UTC 2020-02-01 12:21:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  1 12:22:03.737: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nc7qd/replicasets/nginx-deployment-5c98f8fb5,UID:6bdc05df-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193603,Generation:3,CreationTimestamp:2020-02-01 12:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 53e57b8f-44ed-11ea-a994-fa163e34d433 0xc0018f7077 0xc0018f7078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 12:22:03.737: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  1 12:22:03.738: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nc7qd/replicasets/nginx-deployment-85ddf47c5d,UID:53eb2b63-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193598,Generation:3,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 53e57b8f-44ed-11ea-a994-fa163e34d433 0xc0018f7147 0xc0018f7148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  1 12:22:04.818: INFO: Pod "nginx-deployment-5c98f8fb5-5p4sl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5p4sl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-5p4sl,UID:70777610-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193559,Generation:0,CreationTimestamp:2020-02-01 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267ae27 0xc00267ae28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267ae90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267aff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.819: INFO: Pod "nginx-deployment-5c98f8fb5-759k6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-759k6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-759k6,UID:71379a24-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193571,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b067 0xc00267b068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267b0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267b100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.819: INFO: Pod "nginx-deployment-5c98f8fb5-87hjn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-87hjn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-87hjn,UID:6bf4afcd-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193523,Generation:0,CreationTimestamp:2020-02-01 12:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b267 0xc00267b268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267b2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267b300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.819: INFO: Pod "nginx-deployment-5c98f8fb5-cc6cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cc6cf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-cc6cf,UID:71ca9e1a-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193588,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b487 0xc00267b488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267b4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267b510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.819: INFO: Pod "nginx-deployment-5c98f8fb5-hqz4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hqz4v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-hqz4v,UID:6bf42907-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193520,Generation:0,CreationTimestamp:2020-02-01 12:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b587 0xc00267b588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267b660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267b680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.820: INFO: Pod "nginx-deployment-5c98f8fb5-jb88m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jb88m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-jb88m,UID:70777923-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193611,Generation:0,CreationTimestamp:2020-02-01 12:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b7b7 0xc00267b7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267b850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267b870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:22:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:22:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.820: INFO: Pod "nginx-deployment-5c98f8fb5-n7nls" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n7nls,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-n7nls,UID:6c3f9a52-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193526,Generation:0,CreationTimestamp:2020-02-01 12:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267b9e7 0xc00267b9e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267ba50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267ba70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.820: INFO: Pod "nginx-deployment-5c98f8fb5-pn6n4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pn6n4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-pn6n4,UID:6c36ba51-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193532,Generation:0,CreationTimestamp:2020-02-01 12:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267bb37 0xc00267bb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267bba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267bbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.821: INFO: Pod "nginx-deployment-5c98f8fb5-pxlvs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pxlvs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-pxlvs,UID:71379243-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193573,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267bc87 0xc00267bc88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267bcf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267bd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.821: INFO: Pod "nginx-deployment-5c98f8fb5-qk5l6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qk5l6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-qk5l6,UID:7135cbe7-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193568,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267bd87 0xc00267bd88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267bdf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267be10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.821: INFO: Pod "nginx-deployment-5c98f8fb5-rl89g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rl89g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-rl89g,UID:70403d40-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193600,Generation:0,CreationTimestamp:2020-02-01 12:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267be87 0xc00267be88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00267bf00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00267bf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.821: INFO: Pod "nginx-deployment-5c98f8fb5-wbzg6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wbzg6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-wbzg6,UID:71366c27-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193564,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc00267bfe7 0xc00267bfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.822: INFO: Pod "nginx-deployment-5c98f8fb5-x8st2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x8st2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-5c98f8fb5-x8st2,UID:6be4c528-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193518,Generation:0,CreationTimestamp:2020-02-01 12:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bdc05df-44ed-11ea-a994-fa163e34d433 0xc0026720e7 0xc0026720e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.822: INFO: Pod "nginx-deployment-85ddf47c5d-64rjg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-64rjg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-64rjg,UID:713d296b-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193570,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002672277 0xc002672278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.822: INFO: Pod "nginx-deployment-85ddf47c5d-6hhwz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6hhwz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-6hhwz,UID:54205b3b-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193446,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc0026723e7 0xc0026723e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-01 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://137a505516fb6902fda4cbbf640587c3a8485e7231e74a875f89949079e09096}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.822: INFO: Pod "nginx-deployment-85ddf47c5d-crwlp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-crwlp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-crwlp,UID:71cae03c-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193589,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc0026725c7 0xc0026725c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.823: INFO: Pod "nginx-deployment-85ddf47c5d-fssvr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fssvr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-fssvr,UID:71caeb85-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193591,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc0026726e7 0xc0026726e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.823: INFO: Pod "nginx-deployment-85ddf47c5d-fzpz9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fzpz9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-fzpz9,UID:542db723-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193442,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc0026727f7 0xc0026727f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026737a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.14,StartTime:2020-02-01 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7c56adb3122937d3d2e3c254af5eec9dcde34deaf2627617ed5345a594055f26}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.823: INFO: Pod "nginx-deployment-85ddf47c5d-gptj6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gptj6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-gptj6,UID:70248a84-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193593,Generation:0,CreationTimestamp:2020-02-01 12:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002673867 0xc002673868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026738d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026738f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.823: INFO: Pod "nginx-deployment-85ddf47c5d-ll7gw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ll7gw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-ll7gw,UID:71caf2fd-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193590,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc0026739b7 0xc0026739b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002673a20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002673a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.823: INFO: Pod "nginx-deployment-85ddf47c5d-ncx9c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ncx9c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-ncx9c,UID:542e267e-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193458,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002673ab7 0xc002673ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002673b20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002673b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-01 12:21:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ede656eea564afbff26beb4a339d3d561846f1cd651cc75dfe00ef97828b477}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-q8v27" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q8v27,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-q8v27,UID:53fdcb82-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193462,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002673c07 0xc002673c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002673c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002673c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-01 12:21:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0d6a340c910f2596f6d93f55742d4db131520e972e9bdc74455092bf36d56358}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-qqsnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qqsnf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-qqsnf,UID:713d1fd0-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193574,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002673d57 0xc002673d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002673dc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002673de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-rf5w5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rf5w5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-rf5w5,UID:542037ab-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193449,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc002673e57 0xc002673e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002673ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002673ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-01 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://47c075449ccfccff6ffce3d8d86079e3800db7b690c00e5178d09ff2496d2a97}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-tk656" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tk656,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-tk656,UID:713c9181-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193572,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2c107 0xc001f2c108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2c170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2c190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-tqvf4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tqvf4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-tqvf4,UID:54203911-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193453,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2c247 0xc001f2c248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2c2b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2c2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-01 12:21:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e9f079b50f4689fa9c0a6d18983778963878580a26f9d118d36003184f933ccc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-vqtdq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vqtdq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-vqtdq,UID:71cba969-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193587,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2c3c7 0xc001f2c3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2c430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2c450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.824: INFO: Pod "nginx-deployment-85ddf47c5d-vw6pz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vw6pz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-vw6pz,UID:71cacd17-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193586,Generation:0,CreationTimestamp:2020-02-01 12:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2c4c7 0xc001f2c4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2c530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2c5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.825: INFO: Pod "nginx-deployment-85ddf47c5d-whwfq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-whwfq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-whwfq,UID:54093842-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193433,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2c637 0xc001f2c638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2d1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2d200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-01 12:21:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d4f280bce3ce914a1abeefbca73a6643910b55465c3aee40e2df6290dc9da9e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.825: INFO: Pod "nginx-deployment-85ddf47c5d-x558s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x558s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-x558s,UID:713d3c1f-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193575,Generation:0,CreationTimestamp:2020-02-01 12:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2d2c7 0xc001f2d2c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2d470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2d610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.825: INFO: Pod "nginx-deployment-85ddf47c5d-xhpbn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xhpbn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-xhpbn,UID:701bbaf7-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193578,Generation:0,CreationTimestamp:2020-02-01 12:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2d687 0xc001f2d688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2d6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2d710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.826: INFO: Pod "nginx-deployment-85ddf47c5d-xzqbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xzqbf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-xzqbf,UID:7025486f-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193607,Generation:0,CreationTimestamp:2020-02-01 12:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2d7d7 0xc001f2d7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2d850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2d870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-01 12:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  1 12:22:04.826: INFO: Pod "nginx-deployment-85ddf47c5d-zfx2t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zfx2t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nc7qd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc7qd/pods/nginx-deployment-85ddf47c5d-zfx2t,UID:540939ec-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193406,Generation:0,CreationTimestamp:2020-02-01 12:21:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 53eb2b63-44ed-11ea-a994-fa163e34d433 0xc001f2d947 0xc001f2d948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzt5x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzt5x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xzt5x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f2daf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f2db10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:21:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-01 12:21:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-01 12:21:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://88a52b53c362f867309ab6c51b4b5dbf801c23d0ae52d7e6b189a706cbfe2c6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:22:04.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nc7qd" for this suite.
Feb  1 12:22:44.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:22:45.482: INFO: namespace: e2e-tests-deployment-nc7qd, resource: bindings, ignored listing per whitelist
Feb  1 12:22:45.568: INFO: namespace e2e-tests-deployment-nc7qd deletion completed in 38.854418255s

• [SLOW TEST:96.518 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
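
(A minimal client-go sketch of the kind of Deployment the proportional-scaling behaviour above applies to. It is not the e2e test's own code: the "demo" namespace, the replica counts, the nginx image and the pre-context Create/Update signatures of that era's client-go are assumptions, chosen only to illustrate that raising .spec.replicas while a rollout is in flight is spread across the old and new ReplicaSets in proportion to their current sizes.)

// proportional_scaling_sketch.go -- hedged illustration, not the test's source.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")
	labels := map[string]string{"name": "nginx"}

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// Older client-go (the v1.13 era) uses Create(obj); newer releases take
	// (ctx, obj, metav1.CreateOptions{}).
	created, err := client.AppsV1().Deployments("demo").Create(dep)
	if err != nil {
		panic(err)
	}

	// While a rollout is still in progress, raising .spec.replicas (here from
	// 10 to 30) makes the deployment controller add the extra pods to the old
	// and new ReplicaSets in proportion to their sizes ("proportional scaling").
	created.Spec.Replicas = int32Ptr(30)
	if _, err := client.AppsV1().Deployments("demo").Update(created); err != nil {
		panic(err)
	}
}
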
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:22:45.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  1 12:22:47.772: INFO: Waiting up to 5m0s for pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-v2w8g" to be "success or failure"
Feb  1 12:22:48.592: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 819.243955ms
Feb  1 12:22:51.304: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.531708692s
Feb  1 12:22:53.317: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.544746931s
Feb  1 12:22:55.457: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.684377834s
Feb  1 12:22:57.480: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.707771368s
Feb  1 12:22:59.854: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081435688s
Feb  1 12:23:01.875: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.102911159s
Feb  1 12:23:04.227: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.455150982s
Feb  1 12:23:06.375: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.602210425s
Feb  1 12:23:08.479: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706673333s
Feb  1 12:23:10.584: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.811455314s
Feb  1 12:23:12.648: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.875966744s
Feb  1 12:23:14.667: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.89450264s
Feb  1 12:23:16.697: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.924253532s
Feb  1 12:23:18.812: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.039457017s
Feb  1 12:23:20.829: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.056643705s
Feb  1 12:23:22.844: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.071762175s
Feb  1 12:23:25.057: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.284809555s
Feb  1 12:23:27.930: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.15771446s
STEP: Saw pod success
Feb  1 12:23:27.930: INFO: Pod "pod-8e82922d-44ed-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:23:27.952: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8e82922d-44ed-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:23:28.188: INFO: Waiting for pod pod-8e82922d-44ed-11ea-a88d-0242ac110005 to disappear
Feb  1 12:23:28.318: INFO: Pod pod-8e82922d-44ed-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:23:28.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-v2w8g" for this suite.
Feb  1 12:23:34.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:23:34.410: INFO: namespace: e2e-tests-emptydir-v2w8g, resource: bindings, ignored listing per whitelist
Feb  1 12:23:34.590: INFO: namespace e2e-tests-emptydir-v2w8g deletion completed in 6.233667541s

• [SLOW TEST:49.022 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
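
(For context on what the emptyDir test above exercises, here is a hedged client-go sketch of a comparable pod: an emptyDir volume on the default medium mounted into a container that lists the directory's mode. The busybox image, the /test-volume mount path and the "demo" namespace are illustrative assumptions; the real test uses its own helper image and asserts on the reported mode programmatically.)

// emptydir_mode_sketch.go -- hedged illustration, not the test's source.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium
				// (node-local storage); setting Medium to "Memory" would
				// request tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	// Older client-go: Create(obj); newer: Create(ctx, obj, metav1.CreateOptions{}).
	created, err := client.CoreV1().Pods("demo").Create(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name, "- its logs show the volume mode of the emptyDir mount")
}
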
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:23:34.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:23:34.749: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  1 12:23:34.826: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  1 12:23:39.885: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  1 12:23:45.909: INFO: Creating deployment "test-rolling-update-deployment"
Feb  1 12:23:45.926: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  1 12:23:45.940: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  1 12:23:47.975: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  1 12:23:47.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156625, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:23:50.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156625, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:23:51.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156625, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:23:54.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156625, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:23:56.628: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  1 12:23:56.781: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-hs9nn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hs9nn/deployments/test-rolling-update-deployment,UID:b13427c3-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193998,Generation:1,CreationTimestamp:2020-02-01 12:23:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-01 12:23:46 +0000 UTC 2020-02-01 12:23:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-01 12:23:55 +0000 UTC 2020-02-01 12:23:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  1 12:23:56.785: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-hs9nn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hs9nn/replicasets/test-rolling-update-deployment-75db98fb4c,UID:b13c87e9-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193989,Generation:1,CreationTimestamp:2020-02-01 12:23:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b13427c3-44ed-11ea-a994-fa163e34d433 0xc000ec5f47 0xc000ec5f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  1 12:23:56.785: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  1 12:23:56.785: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-hs9nn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hs9nn/replicasets/test-rolling-update-controller,UID:aa8d3634-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193997,Generation:2,CreationTimestamp:2020-02-01 12:23:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b13427c3-44ed-11ea-a994-fa163e34d433 0xc000ec52ef 0xc000ec5660}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 12:23:56.790: INFO: Pod "test-rolling-update-deployment-75db98fb4c-qzj98" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-qzj98,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-hs9nn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hs9nn/pods/test-rolling-update-deployment-75db98fb4c-qzj98,UID:b13e7890-44ed-11ea-a994-fa163e34d433,ResourceVersion:20193988,Generation:0,CreationTimestamp:2020-02-01 12:23:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c b13c87e9-44ed-11ea-a994-fa163e34d433 0xc0014795f7 0xc0014795f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wzgfz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wzgfz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wzgfz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001479990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0014799b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:23:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:23:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:23:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:23:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-01 12:23:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-01 12:23:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f2d5a5b9efaac34cfdb703a1674a641b77c8500171efc5742b153452bd767323}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:23:56.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hs9nn" for this suite.
Feb  1 12:24:06.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:24:06.899: INFO: namespace: e2e-tests-deployment-hs9nn, resource: bindings, ignored listing per whitelist
Feb  1 12:24:07.161: INFO: namespace e2e-tests-deployment-hs9nn deletion completed in 10.365777122s

• [SLOW TEST:32.571 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
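For reference, the rolling update exercised above can be reproduced by hand with a minimal Deployment sketch like the one below (names are illustrative; the test builds its objects through the e2e framework's Go client rather than kubectl):

# Create a Deployment with the default RollingUpdate strategy.
kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rolling-update-demo
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rolling-update-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Changing the pod template triggers the rollout: a new ReplicaSet is
# created, new pods come up, and the old pods are deleted.
kubectl set image deployment/rolling-update-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/rolling-update-demo
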
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:24:07.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  1 12:24:20.727: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:24:21.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-cvb9l" for this suite.
Feb  1 12:24:44.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:24:44.744: INFO: namespace: e2e-tests-replicaset-cvb9l, resource: bindings, ignored listing per whitelist
Feb  1 12:24:44.752: INFO: namespace e2e-tests-replicaset-cvb9l deletion completed in 22.927416121s

• [SLOW TEST:37.590 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
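The adoption step above relies on controller ownership: a bare pod whose labels match a new ReplicaSet's selector gets an ownerReference to that ReplicaSet. A minimal by-hand sketch (illustrative names; the test drives this through the Go client):

# Start with an orphan pod carrying the label the ReplicaSet will select on.
kubectl run pod-adoption-release --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels=name=pod-adoption-release
# Create a ReplicaSet whose selector matches that label.
kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# The formerly orphaned pod now lists the ReplicaSet as its controller;
# overwriting the matched label would release it again (see the
# ReplicationController example further below).
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
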
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:24:44.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:24:45.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4tcqf" for this suite.
Feb  1 12:25:09.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:25:09.255: INFO: namespace: e2e-tests-pods-4tcqf, resource: bindings, ignored listing per whitelist
Feb  1 12:25:09.387: INFO: namespace e2e-tests-pods-4tcqf deletion completed in 24.263680663s

• [SLOW TEST:24.635 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
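The QOS class verified above is derived from the pod's resource requests and limits: equal requests and limits on every container yield Guaranteed, requests without matching limits yield Burstable, and no requests or limits yield BestEffort. A minimal sketch with an illustrative name:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
# Requests equal to limits on every container => Guaranteed.
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
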
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:25:09.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:25:09.807: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  1 12:25:14.819: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  1 12:25:18.842: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  1 12:25:20.857: INFO: Creating deployment "test-rollover-deployment"
Feb  1 12:25:20.898: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  1 12:25:22.963: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  1 12:25:23.002: INFO: Ensure that both replica sets have 1 created replica
Feb  1 12:25:23.012: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  1 12:25:23.036: INFO: Updating deployment test-rollover-deployment
Feb  1 12:25:23.036: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  1 12:25:25.417: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  1 12:25:25.430: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  1 12:25:25.441: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:25.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:27.487: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:27.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:29.467: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:29.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:31.459: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:31.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156724, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:33.466: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:33.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156732, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:35.465: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:35.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156732, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:37.511: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:37.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156732, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:39.496: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:39.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156732, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:41.467: INFO: all replica sets need to contain the pod-template-hash label
Feb  1 12:25:41.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156732, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:43.764: INFO: 
Feb  1 12:25:43.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156742, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716156721, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  1 12:25:45.641: INFO: 
Feb  1 12:25:45.641: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  1 12:25:45.672: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-cv9xc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cv9xc/deployments/test-rollover-deployment,UID:e9cc8b17-44ed-11ea-a994-fa163e34d433,ResourceVersion:20194280,Generation:2,CreationTimestamp:2020-02-01 12:25:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-01 12:25:21 +0000 UTC 2020-02-01 12:25:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-01 12:25:43 +0000 UTC 2020-02-01 12:25:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  1 12:25:45.679: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-cv9xc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cv9xc/replicasets/test-rollover-deployment-5b8479fdb6,UID:eb18625d-44ed-11ea-a994-fa163e34d433,ResourceVersion:20194269,Generation:2,CreationTimestamp:2020-02-01 12:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9cc8b17-44ed-11ea-a994-fa163e34d433 0xc002721117 0xc002721118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  1 12:25:45.679: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  1 12:25:45.679: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-cv9xc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cv9xc/replicasets/test-rollover-controller,UID:e31207a3-44ed-11ea-a994-fa163e34d433,ResourceVersion:20194279,Generation:2,CreationTimestamp:2020-02-01 12:25:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9cc8b17-44ed-11ea-a994-fa163e34d433 0xc002720eff 0xc002720f10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 12:25:45.680: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-cv9xc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cv9xc/replicasets/test-rollover-deployment-58494b7559,UID:e9dfac69-44ed-11ea-a994-fa163e34d433,ResourceVersion:20194236,Generation:2,CreationTimestamp:2020-02-01 12:25:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e9cc8b17-44ed-11ea-a994-fa163e34d433 0xc002721047 0xc002721048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  1 12:25:45.687: INFO: Pod "test-rollover-deployment-5b8479fdb6-bgf78" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-bgf78,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-cv9xc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cv9xc/pods/test-rollover-deployment-5b8479fdb6-bgf78,UID:eb880de2-44ed-11ea-a994-fa163e34d433,ResourceVersion:20194255,Generation:0,CreationTimestamp:2020-02-01 12:25:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 eb18625d-44ed-11ea-a994-fa163e34d433 0xc002721cd7 0xc002721cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rj6dz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj6dz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rj6dz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002721d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002721d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:25:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:25:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:25:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:25:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-01 12:25:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-01 12:25:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4e24ca86f7a76ae4fec6250c974ff406d0f50376dc8c3a4e1b83320a217678a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:25:45.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-cv9xc" for this suite.
Feb  1 12:25:53.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:25:54.530: INFO: namespace: e2e-tests-deployment-cv9xc, resource: bindings, ignored listing per whitelist
Feb  1 12:25:54.665: INFO: namespace e2e-tests-deployment-cv9xc deletion completed in 8.971540611s

• [SLOW TEST:45.277 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
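The rollover above differs from a plain rolling update in that the pod template changes again while an earlier rollout is still in flight; the Deployment then abandons the intermediate ReplicaSet (it is scaled back to zero without ever becoming ready) and converges on the newest one. An illustrative by-hand equivalent, assuming a hypothetical Deployment named rollover-demo that is mid-rollout:

# Push a second template change before the first rollout completes.
kubectl set image deployment/rollover-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
# Older ReplicaSets, including the one for the abandoned intermediate
# template, end up with 0 replicas.
kubectl get rs
kubectl rollout status deployment/rollover-demo
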
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:25:54.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fe1818c0-44ed-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:25:55.134: INFO: Waiting up to 5m0s for pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-shgml" to be "success or failure"
Feb  1 12:25:55.144: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179063ms
Feb  1 12:25:57.160: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026375236s
Feb  1 12:25:59.174: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040398312s
Feb  1 12:26:01.483: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349302672s
Feb  1 12:26:03.884: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750605718s
Feb  1 12:26:05.901: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.767305653s
STEP: Saw pod success
Feb  1 12:26:05.901: INFO: Pod "pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:26:05.906: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:26:06.555: INFO: Waiting for pod pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005 to disappear
Feb  1 12:26:06.584: INFO: Pod pod-secrets-fe367221-44ed-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:26:06.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-shgml" for this suite.
Feb  1 12:26:12.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:26:13.031: INFO: namespace: e2e-tests-secrets-shgml, resource: bindings, ignored listing per whitelist
Feb  1 12:26:13.187: INFO: namespace e2e-tests-secrets-shgml deletion completed in 6.585807927s
STEP: Destroying namespace "e2e-tests-secret-namespace-cnzln" for this suite.
Feb  1 12:26:19.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:26:19.349: INFO: namespace: e2e-tests-secret-namespace-cnzln, resource: bindings, ignored listing per whitelist
Feb  1 12:26:19.469: INFO: namespace e2e-tests-secret-namespace-cnzln deletion completed in 6.282433634s

• [SLOW TEST:24.803 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
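The point of the test above is that Secrets are namespaced: a pod can only mount a Secret from its own namespace, so a same-named Secret elsewhere is irrelevant. A minimal sketch with illustrative names:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data-1=value-a
kubectl -n demo-b create secret generic shared-name --from-literal=data-1=value-b
# A pod in demo-a mounting "shared-name" sees value-a, never value-b.
kubectl -n demo-a create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF
kubectl -n demo-a logs secret-volume-demo   # prints "value-a" once the pod completes
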
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:26:19.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  1 12:26:19.733: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  1 12:26:24.764: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:26:26.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-djcf2" for this suite.
Feb  1 12:26:33.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:26:34.568: INFO: namespace: e2e-tests-replication-controller-djcf2, resource: bindings, ignored listing per whitelist
Feb  1 12:26:34.933: INFO: namespace e2e-tests-replication-controller-djcf2 deletion completed in 8.8391126s

• [SLOW TEST:15.464 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
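Release works the same way for ReplicationControllers as for ReplicaSets: once the matched label is overwritten, the controller drops its ownerReference on the pod and creates a replacement to restore the desired replica count. A by-hand sketch with a hypothetical pod name:

# While the labels match, the RC is recorded as the pod's controller.
kubectl get pod pod-release-abc12 -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController
# Overwriting the matched label releases the pod...
kubectl label pod pod-release-abc12 name=not-matching --overwrite
# ...its ownerReferences are cleared, and the RC spins up a replacement.
kubectl get pod pod-release-abc12 -o jsonpath='{.metadata.ownerReferences}'
kubectl get pods -l name=pod-release
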
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:26:34.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-16a7a19e-44ee-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:26:36.157: INFO: Waiting up to 5m0s for pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-dq7l5" to be "success or failure"
Feb  1 12:26:36.191: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.885243ms
Feb  1 12:26:38.210: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05320403s
Feb  1 12:26:40.220: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062543153s
Feb  1 12:26:42.237: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080033831s
Feb  1 12:26:44.257: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099953458s
Feb  1 12:26:46.271: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11348276s
STEP: Saw pod success
Feb  1 12:26:46.271: INFO: Pod "pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:26:46.276: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:26:46.345: INFO: Waiting for pod pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005 to disappear
Feb  1 12:26:46.352: INFO: Pod pod-secrets-16a999af-44ee-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:26:46.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dq7l5" for this suite.
Feb  1 12:26:52.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:26:52.756: INFO: namespace: e2e-tests-secrets-dq7l5, resource: bindings, ignored listing per whitelist
Feb  1 12:26:52.881: INFO: namespace e2e-tests-secrets-dq7l5 deletion completed in 6.512861429s

• [SLOW TEST:17.947 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
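The multi-volume case above is simply the same Secret referenced by two volume entries and mounted at two paths in one container. A minimal sketch with illustrative names:

kubectl create secret generic my-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: my-secret
  - name: secret-volume-2
    secret:
      secretName: my-secret
EOF
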
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:26:52.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:26:53.028: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.55893ms)
Feb  1 12:26:53.037: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.516814ms)
Feb  1 12:26:53.098: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 60.883698ms)
Feb  1 12:26:53.106: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.246908ms)
Feb  1 12:26:53.112: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.526407ms)
Feb  1 12:26:53.117: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.354944ms)
Feb  1 12:26:53.129: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.274526ms)
Feb  1 12:26:53.138: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.489621ms)
Feb  1 12:26:53.144: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.831457ms)
Feb  1 12:26:53.149: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.418314ms)
Feb  1 12:26:53.153: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.601289ms)
Feb  1 12:26:53.158: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.325637ms)
Feb  1 12:26:53.166: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.534001ms)
Feb  1 12:26:53.170: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.224974ms)
Feb  1 12:26:53.175: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.370483ms)
Feb  1 12:26:53.181: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.719259ms)
Feb  1 12:26:53.195: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.381805ms)
Feb  1 12:26:53.204: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.894896ms)
Feb  1 12:26:53.220: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.290444ms)
Feb  1 12:26:53.234: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.103939ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:26:53.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-p7h2q" for this suite.
Feb  1 12:26:59.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:26:59.393: INFO: namespace: e2e-tests-proxy-p7h2q, resource: bindings, ignored listing per whitelist
Feb  1 12:26:59.509: INFO: namespace e2e-tests-proxy-p7h2q deletion completed in 6.263872272s

• [SLOW TEST:6.628 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
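Each numbered request above goes to the node's proxy subresource on the apiserver, which forwards to the kubelet's /logs endpoint (a directory listing of /var/log on the node, hence entries like alternatives.log). The same URL can be fetched directly:

# Node name taken from the log above; any schedulable node works.
kubectl get --raw /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/
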
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:26:59.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb  1 12:26:59.692: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:26:59.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zwmnl" for this suite.
Feb  1 12:27:05.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:27:06.120: INFO: namespace: e2e-tests-kubectl-zwmnl, resource: bindings, ignored listing per whitelist
Feb  1 12:27:06.175: INFO: namespace e2e-tests-kubectl-zwmnl deletion completed in 6.306963872s

• [SLOW TEST:6.666 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
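With --port=0 the proxy binds an ephemeral local port and reports it on stdout, which is what the test parses before curling /api/ through it. An illustrative manual run (the printed port will differ):

kubectl proxy --port=0 --disable-filter=true &
# Example output: Starting to serve on 127.0.0.1:37865
curl http://127.0.0.1:37865/api/
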
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:27:06.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  1 12:27:06.396: INFO: Waiting up to 5m0s for pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-nfth4" to be "success or failure"
Feb  1 12:27:06.404: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28091ms
Feb  1 12:27:08.507: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111144659s
Feb  1 12:27:10.561: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165339026s
Feb  1 12:27:12.808: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411924467s
Feb  1 12:27:14.816: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.420153907s
Feb  1 12:27:16.827: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431671914s
STEP: Saw pod success
Feb  1 12:27:16.827: INFO: Pod "downward-api-28b1393b-44ee-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:27:16.831: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-28b1393b-44ee-11ea-a88d-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  1 12:27:17.625: INFO: Waiting for pod downward-api-28b1393b-44ee-11ea-a88d-0242ac110005 to disappear
Feb  1 12:27:17.695: INFO: Pod downward-api-28b1393b-44ee-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:27:17.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nfth4" for this suite.
Feb  1 12:27:23.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:27:24.009: INFO: namespace: e2e-tests-downward-api-nfth4, resource: bindings, ignored listing per whitelist
Feb  1 12:27:24.068: INFO: namespace e2e-tests-downward-api-nfth4 deletion completed in 6.202325498s

• [SLOW TEST:17.893 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
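The env vars checked above come from the downward API's resourceFieldRef, which exposes a container's own requests and limits to its environment. A minimal sketch (illustrative name and values):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_REQUEST'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
kubectl logs downward-api-demo   # shows the resolved values once the pod completes
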
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:27:24.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  1 12:27:24.320: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  1 12:27:24.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:26.572: INFO: stderr: ""
Feb  1 12:27:26.572: INFO: stdout: "service/redis-slave created\n"
Feb  1 12:27:26.572: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  1 12:27:26.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:27.222: INFO: stderr: ""
Feb  1 12:27:27.223: INFO: stdout: "service/redis-master created\n"
Feb  1 12:27:27.223: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  1 12:27:27.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:27.730: INFO: stderr: ""
Feb  1 12:27:27.730: INFO: stdout: "service/frontend created\n"
Feb  1 12:27:27.731: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  1 12:27:27.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:28.132: INFO: stderr: ""
Feb  1 12:27:28.132: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  1 12:27:28.132: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  1 12:27:28.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:28.529: INFO: stderr: ""
Feb  1 12:27:28.529: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  1 12:27:28.529: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  1 12:27:28.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:29.067: INFO: stderr: ""
Feb  1 12:27:29.067: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  1 12:27:29.067: INFO: Waiting for all frontend pods to be Running.
Feb  1 12:27:59.119: INFO: Waiting for frontend to serve content.
Feb  1 12:27:59.369: INFO: Trying to add a new entry to the guestbook.
Feb  1 12:27:59.432: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  1 12:27:59.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:27:59.780: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:27:59.780: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 12:27:59.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:28:00.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:28:00.081: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 12:28:00.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:28:00.376: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:28:00.376: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 12:28:00.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:28:00.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:28:00.618: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 12:28:00.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:28:01.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:28:01.197: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  1 12:28:01.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nwdz5'
Feb  1 12:28:01.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:28:01.545: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:28:01.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nwdz5" for this suite.
Feb  1 12:28:53.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:28:53.763: INFO: namespace: e2e-tests-kubectl-nwdz5, resource: bindings, ignored listing per whitelist
Feb  1 12:28:53.887: INFO: namespace e2e-tests-kubectl-nwdz5 deletion completed in 52.324177373s

• [SLOW TEST:89.819 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
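
Note: the Deployment manifests echoed above use apiVersion extensions/v1beta1, which the v1.13 cluster under test still serves but which later Kubernetes releases removed. As a rough sketch (not part of the test run), the frontend Deployment re-expressed against apps/v1 needs an explicit spec.selector, since extensions/v1beta1 defaulted the selector from the template labels; everything else below is copied from the manifest in the log, and the same change would apply to the redis-master and redis-slave Deployments.

# Sketch only: the frontend Deployment from the log, re-expressed for apps/v1.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
EOF
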
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:28:53.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb  1 12:28:54.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  1 12:28:54.365: INFO: stderr: ""
Feb  1 12:28:54.365: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:28:54.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2jptt" for this suite.
Feb  1 12:29:00.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:29:00.610: INFO: namespace: e2e-tests-kubectl-2jptt, resource: bindings, ignored listing per whitelist
Feb  1 12:29:00.892: INFO: namespace e2e-tests-kubectl-2jptt deletion completed in 6.520653481s

• [SLOW TEST:7.005 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
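
Note: the api-versions check above only needs the core group/version to appear in the list kubectl prints. A one-line equivalent outside the e2e framework could look like the following; the grep invocation is an illustration, not something the test runs.

# Sketch: assert that the core "v1" group/version is served, as the test above checks.
kubectl api-versions | grep -x v1 && echo "core v1 API is available"
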
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:29:00.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:29:01.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-qh9zw" for this suite.
Feb  1 12:29:07.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:29:07.292: INFO: namespace: e2e-tests-services-qh9zw, resource: bindings, ignored listing per whitelist
Feb  1 12:29:07.393: INFO: namespace e2e-tests-services-qh9zw deletion completed in 6.252221206s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.500 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
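
Note: the Services spec above leaves no intermediate steps in the log; it verifies that the cluster's built-in kubernetes Service in the default namespace exposes the API server securely. A rough manual equivalent, assuming a standard cluster where that Service publishes an https port on 443, might be:

# Sketch: inspect the built-in API server Service; on a typical cluster the
# port named "https" is 443. Treat this as illustrative, not as the test's own logic.
kubectl get service kubernetes --namespace=default \
  -o jsonpath='{.spec.ports[?(@.name=="https")].port}{"\n"}'
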
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:29:07.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-717a1bd6-44ee-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  1 12:29:08.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-6d5n9" to be "success or failure"
Feb  1 12:29:08.706: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.286893ms
Feb  1 12:29:10.873: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189395008s
Feb  1 12:29:12.885: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201489365s
Feb  1 12:29:14.940: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255956619s
Feb  1 12:29:16.950: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26627445s
STEP: Saw pod success
Feb  1 12:29:16.950: INFO: Pod "pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:29:16.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  1 12:29:17.321: INFO: Waiting for pod pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005 to disappear
Feb  1 12:29:17.333: INFO: Pod pod-projected-configmaps-718b5243-44ee-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:29:17.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6d5n9" for this suite.
Feb  1 12:29:23.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:29:23.519: INFO: namespace: e2e-tests-projected-6d5n9, resource: bindings, ignored listing per whitelist
Feb  1 12:29:23.572: INFO: namespace e2e-tests-projected-6d5n9 deletion completed in 6.22821997s

• [SLOW TEST:16.179 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
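
Note: the projected-ConfigMap test above creates a ConfigMap, mounts it through a projected volume with defaultMode set, and checks the resulting file permissions from inside the pod. The exact manifest is not echoed in the log, so the following is only a sketch with hypothetical names (demo-config, projected-cm-demo) and an arbitrary mode of 0440.

# Sketch: mount a ConfigMap through a projected volume with defaultMode set,
# then read the resulting file mode from inside the container.
kubectl create configmap demo-config --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0440
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-cm-demo    # once the pod has completed
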
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:29:23.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  1 12:29:23.942: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-86h8q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86h8q/configmaps/e2e-watch-test-resource-version,UID:7a9e3908-44ee-11ea-a994-fa163e34d433,ResourceVersion:20194952,Generation:0,CreationTimestamp:2020-02-01 12:29:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  1 12:29:23.942: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-86h8q,SelfLink:/api/v1/namespaces/e2e-tests-watch-86h8q/configmaps/e2e-watch-test-resource-version,UID:7a9e3908-44ee-11ea-a994-fa163e34d433,ResourceVersion:20194953,Generation:0,CreationTimestamp:2020-02-01 12:29:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:29:23.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-86h8q" for this suite.
Feb  1 12:29:30.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:29:30.196: INFO: namespace: e2e-tests-watch-86h8q, resource: bindings, ignored listing per whitelist
Feb  1 12:29:30.237: INFO: namespace e2e-tests-watch-86h8q deletion completed in 6.287564919s

• [SLOW TEST:6.665 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
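
Note: the Watchers test above records the resourceVersion returned by the first update of the ConfigMap, opens a watch from that version, and expects to see only the later events, the second MODIFIED and the DELETED, which is what the two "Got :" lines show. The same idea can be sketched against the raw API; the namespace and resourceVersion below are placeholders, not values from this run.

# Sketch: watch ConfigMaps starting from a known resourceVersion.
# Replace <namespace> and <resourceVersion> with real values; events older
# than that version are not replayed.
kubectl get --raw "/api/v1/namespaces/<namespace>/configmaps?watch=true&resourceVersion=<resourceVersion>"
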
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:29:30.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:29:30.723: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  1 12:29:30.878: INFO: Number of nodes with available pods: 0
Feb  1 12:29:30.878: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:31.899: INFO: Number of nodes with available pods: 0
Feb  1 12:29:31.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:32.915: INFO: Number of nodes with available pods: 0
Feb  1 12:29:32.915: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:33.990: INFO: Number of nodes with available pods: 0
Feb  1 12:29:33.990: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:34.908: INFO: Number of nodes with available pods: 0
Feb  1 12:29:34.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:36.513: INFO: Number of nodes with available pods: 0
Feb  1 12:29:36.513: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:37.016: INFO: Number of nodes with available pods: 0
Feb  1 12:29:37.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:38.056: INFO: Number of nodes with available pods: 0
Feb  1 12:29:38.056: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:38.978: INFO: Number of nodes with available pods: 0
Feb  1 12:29:38.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:39.903: INFO: Number of nodes with available pods: 0
Feb  1 12:29:39.903: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:40.905: INFO: Number of nodes with available pods: 1
Feb  1 12:29:40.905: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  1 12:29:40.992: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:42.042: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:43.279: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:44.056: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:45.269: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:46.073: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:47.111: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:47.111: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:48.047: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:48.048: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:49.037: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:49.037: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:50.043: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:50.043: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:51.040: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:51.040: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:52.048: INFO: Wrong image for pod: daemon-set-khg7r. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  1 12:29:52.048: INFO: Pod daemon-set-khg7r is not available
Feb  1 12:29:53.042: INFO: Pod daemon-set-hscb6 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  1 12:29:53.077: INFO: Number of nodes with available pods: 0
Feb  1 12:29:53.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:54.276: INFO: Number of nodes with available pods: 0
Feb  1 12:29:54.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:55.182: INFO: Number of nodes with available pods: 0
Feb  1 12:29:55.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:56.108: INFO: Number of nodes with available pods: 0
Feb  1 12:29:56.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:58.092: INFO: Number of nodes with available pods: 0
Feb  1 12:29:58.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:29:59.600: INFO: Number of nodes with available pods: 0
Feb  1 12:29:59.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:30:00.127: INFO: Number of nodes with available pods: 0
Feb  1 12:30:00.128: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  1 12:30:01.105: INFO: Number of nodes with available pods: 1
Feb  1 12:30:01.105: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b2qf8, will wait for the garbage collector to delete the pods
Feb  1 12:30:01.214: INFO: Deleting DaemonSet.extensions daemon-set took: 15.43107ms
Feb  1 12:30:01.314: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.363439ms
Feb  1 12:30:12.979: INFO: Number of nodes with available pods: 0
Feb  1 12:30:12.979: INFO: Number of running nodes: 0, number of available pods: 0
Feb  1 12:30:12.984: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b2qf8/daemonsets","resourceVersion":"20195062"},"items":null}

Feb  1 12:30:12.988: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b2qf8/pods","resourceVersion":"20195062"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:30:13.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-b2qf8" for this suite.
Feb  1 12:30:19.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:30:19.175: INFO: namespace: e2e-tests-daemonsets-b2qf8, resource: bindings, ignored listing per whitelist
Feb  1 12:30:19.463: INFO: namespace e2e-tests-daemonsets-b2qf8 deletion completed in 6.458605534s

• [SLOW TEST:49.225 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
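
Note: the DaemonSet test above creates daemon-set with the nginx:1.14-alpine image, switches the image to gcr.io/kubernetes-e2e-test-images/redis:1.0, and waits for the RollingUpdate strategy to replace the pod on the single node. A hand-driven version of the same flow might look like the sketch below; the container name "app" and the label key are assumptions, since the test's own pod template is not echoed in the log.

# Sketch: a DaemonSet with the RollingUpdate strategy, then an image change
# rolled out and watched. The images are the ones named in the log above.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set
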
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:30:19.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  1 12:30:20.250: INFO: created pod pod-service-account-defaultsa
Feb  1 12:30:20.250: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  1 12:30:20.280: INFO: created pod pod-service-account-mountsa
Feb  1 12:30:20.280: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  1 12:30:20.536: INFO: created pod pod-service-account-nomountsa
Feb  1 12:30:20.536: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  1 12:30:20.759: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  1 12:30:20.759: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  1 12:30:21.538: INFO: created pod pod-service-account-mountsa-mountspec
Feb  1 12:30:21.538: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  1 12:30:21.562: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  1 12:30:21.562: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  1 12:30:22.552: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  1 12:30:22.552: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  1 12:30:22.828: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  1 12:30:22.828: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  1 12:30:23.685: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  1 12:30:23.685: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:30:23.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-fssxh" for this suite.
Feb  1 12:30:53.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:30:54.063: INFO: namespace: e2e-tests-svcaccounts-fssxh, resource: bindings, ignored listing per whitelist
Feb  1 12:30:54.098: INFO: namespace e2e-tests-svcaccounts-fssxh deletion completed in 30.399966476s

• [SLOW TEST:34.635 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
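
Note: the ServiceAccounts test above creates pods that combine a ServiceAccount-level automount setting with the pod-level automountServiceAccountToken field and then checks whether the token volume was mounted; the pod-level field takes precedence when both are set, which matches the true/false pattern in the log. A minimal sketch of opting out at the pod level, with hypothetical names:

# Sketch: opt out of the API token automount for a single pod.
# The pod-level field overrides the ServiceAccount's setting.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  automountServiceAccountToken: false
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo 'no token mounted'"]
EOF
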
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:30:54.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:31:56.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-d868p" for this suite.
Feb  1 12:32:04.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:32:04.374: INFO: namespace: e2e-tests-container-runtime-d868p, resource: bindings, ignored listing per whitelist
Feb  1 12:32:04.464: INFO: namespace e2e-tests-container-runtime-d868p deletion completed in 8.205163466s

• [SLOW TEST:70.366 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
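
Note: the Container Runtime test above starts containers named terminate-cmd-rpa, terminate-cmd-rpof and terminate-cmd-rpn and checks the RestartCount, Phase, Ready condition and State that result from their exits; the suffixes appear to encode the restart policy (Always, OnFailure, Never). A small sketch of observing the same status fields by hand, using a hypothetical pod name:

# Sketch: a container that exits non-zero under restartPolicy: Never ends up
# with pod phase Failed and an exit code recorded in its terminated state.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'
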
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:32:04.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  1 12:32:04.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-97f6c'
Feb  1 12:32:04.853: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  1 12:32:04.854: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  1 12:32:06.879: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-5krp9]
Feb  1 12:32:06.879: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-5krp9" in namespace "e2e-tests-kubectl-97f6c" to be "running and ready"
Feb  1 12:32:06.882: INFO: Pod "e2e-test-nginx-rc-5krp9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682464ms
Feb  1 12:32:09.466: INFO: Pod "e2e-test-nginx-rc-5krp9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.587347762s
Feb  1 12:32:11.488: INFO: Pod "e2e-test-nginx-rc-5krp9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608737136s
Feb  1 12:32:13.503: INFO: Pod "e2e-test-nginx-rc-5krp9": Phase="Running", Reason="", readiness=true. Elapsed: 6.62458081s
Feb  1 12:32:13.503: INFO: Pod "e2e-test-nginx-rc-5krp9" satisfied condition "running and ready"
Feb  1 12:32:13.503: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-5krp9]
Feb  1 12:32:13.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-97f6c'
Feb  1 12:32:13.829: INFO: stderr: ""
Feb  1 12:32:13.829: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  1 12:32:13.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-97f6c'
Feb  1 12:32:14.044: INFO: stderr: ""
Feb  1 12:32:14.044: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:32:14.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-97f6c" for this suite.
Feb  1 12:32:38.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:32:38.269: INFO: namespace: e2e-tests-kubectl-97f6c, resource: bindings, ignored listing per whitelist
Feb  1 12:32:38.362: INFO: namespace e2e-tests-kubectl-97f6c deletion completed in 24.296696548s

• [SLOW TEST:33.897 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
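
Note: the deprecation warning at 12:32:04 points out that kubectl run --generator=run/v1, which creates a ReplicationController, is on its way out. The explicit manifest below should be roughly what that generator produces for the image used above; the run=e2e-test-nginx-rc label and selector are an assumption about the generator's behavior rather than something captured in the log.

# Sketch: an explicit ReplicationController equivalent to the deprecated
# "kubectl run --generator=run/v1" invocation shown in the log.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl logs rc/e2e-test-nginx-rc    # the same log check the test performs
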
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:32:38.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  1 12:32:38.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:39.078: INFO: stderr: ""
Feb  1 12:32:39.078: INFO: stdout: "pod/pause created\n"
Feb  1 12:32:39.078: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  1 12:32:39.078: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-zv5g2" to be "running and ready"
Feb  1 12:32:39.084: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462447ms
Feb  1 12:32:41.185: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107465923s
Feb  1 12:32:43.200: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122239287s
Feb  1 12:32:45.225: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14663116s
Feb  1 12:32:47.243: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.165281602s
Feb  1 12:32:47.243: INFO: Pod "pause" satisfied condition "running and ready"
Feb  1 12:32:47.243: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  1 12:32:47.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:47.471: INFO: stderr: ""
Feb  1 12:32:47.471: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  1 12:32:47.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:47.600: INFO: stderr: ""
Feb  1 12:32:47.600: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  1 12:32:47.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:47.747: INFO: stderr: ""
Feb  1 12:32:47.747: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  1 12:32:47.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:47.864: INFO: stderr: ""
Feb  1 12:32:47.864: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  1 12:32:47.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:48.135: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:32:48.135: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  1 12:32:48.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-zv5g2'
Feb  1 12:32:48.294: INFO: stderr: "No resources found.\n"
Feb  1 12:32:48.294: INFO: stdout: ""
Feb  1 12:32:48.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-zv5g2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  1 12:32:48.416: INFO: stderr: ""
Feb  1 12:32:48.417: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:32:48.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zv5g2" for this suite.
Feb  1 12:32:56.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:32:56.706: INFO: namespace: e2e-tests-kubectl-zv5g2, resource: bindings, ignored listing per whitelist
Feb  1 12:32:56.785: INFO: namespace e2e-tests-kubectl-zv5g2 deletion completed in 8.354512905s

• [SLOW TEST:18.423 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
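
Note: the label test above exercises three plain kubectl operations on the pause pod; pulled out of the log, the sequence is just:

# The three label operations exercised above:
kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # show it as an extra column
kubectl label pods pause testing-label-                      # trailing '-' removes it
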
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:32:56.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f9aaea30-44ee-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:32:57.002: INFO: Waiting up to 5m0s for pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-zmmkf" to be "success or failure"
Feb  1 12:32:57.010: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583001ms
Feb  1 12:32:59.026: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023615667s
Feb  1 12:33:01.042: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039293622s
Feb  1 12:33:03.316: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313549219s
Feb  1 12:33:05.327: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324874839s
Feb  1 12:33:07.344: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.341541013s
STEP: Saw pod success
Feb  1 12:33:07.344: INFO: Pod "pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:33:07.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:33:08.108: INFO: Waiting for pod pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005 to disappear
Feb  1 12:33:08.413: INFO: Pod pod-secrets-f9ac04da-44ee-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:33:08.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zmmkf" for this suite.
Feb  1 12:33:14.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:33:14.804: INFO: namespace: e2e-tests-secrets-zmmkf, resource: bindings, ignored listing per whitelist
Feb  1 12:33:14.929: INFO: namespace e2e-tests-secrets-zmmkf deletion completed in 6.488908309s

• [SLOW TEST:18.143 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
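
Note: the Secrets test above mounts a Secret as a volume with defaultMode set and verifies the file mode inside the pod; its manifest is not echoed in the log, so the names and the 0400 mode below are only illustrative.

# Sketch: mount a Secret with an explicit defaultMode and inspect the result.
kubectl create secret generic demo-secret --from-literal=password=s3cret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF
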
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:33:14.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-rbfsm
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-rbfsm
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-rbfsm
Feb  1 12:33:15.196: INFO: Found 0 stateful pods, waiting for 1
Feb  1 12:33:25.211: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  1 12:33:25.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  1 12:33:26.079: INFO: stderr: "I0201 12:33:25.474751    2796 log.go:172] (0xc0006c6370) (0xc0006ec640) Create stream\nI0201 12:33:25.474973    2796 log.go:172] (0xc0006c6370) (0xc0006ec640) Stream added, broadcasting: 1\nI0201 12:33:25.480133    2796 log.go:172] (0xc0006c6370) Reply frame received for 1\nI0201 12:33:25.480166    2796 log.go:172] (0xc0006c6370) (0xc0005a4be0) Create stream\nI0201 12:33:25.480175    2796 log.go:172] (0xc0006c6370) (0xc0005a4be0) Stream added, broadcasting: 3\nI0201 12:33:25.481240    2796 log.go:172] (0xc0006c6370) Reply frame received for 3\nI0201 12:33:25.481276    2796 log.go:172] (0xc0006c6370) (0xc0006ec6e0) Create stream\nI0201 12:33:25.481288    2796 log.go:172] (0xc0006c6370) (0xc0006ec6e0) Stream added, broadcasting: 5\nI0201 12:33:25.482564    2796 log.go:172] (0xc0006c6370) Reply frame received for 5\nI0201 12:33:25.836576    2796 log.go:172] (0xc0006c6370) Data frame received for 3\nI0201 12:33:25.836673    2796 log.go:172] (0xc0005a4be0) (3) Data frame handling\nI0201 12:33:25.836699    2796 log.go:172] (0xc0005a4be0) (3) Data frame sent\nI0201 12:33:26.058730    2796 log.go:172] (0xc0006c6370) Data frame received for 1\nI0201 12:33:26.059001    2796 log.go:172] (0xc0006c6370) (0xc0005a4be0) Stream removed, broadcasting: 3\nI0201 12:33:26.059103    2796 log.go:172] (0xc0006ec640) (1) Data frame handling\nI0201 12:33:26.059157    2796 log.go:172] (0xc0006c6370) (0xc0006ec6e0) Stream removed, broadcasting: 5\nI0201 12:33:26.059353    2796 log.go:172] (0xc0006ec640) (1) Data frame sent\nI0201 12:33:26.059417    2796 log.go:172] (0xc0006c6370) (0xc0006ec640) Stream removed, broadcasting: 1\nI0201 12:33:26.059453    2796 log.go:172] (0xc0006c6370) Go away received\nI0201 12:33:26.060256    2796 log.go:172] (0xc0006c6370) (0xc0006ec640) Stream removed, broadcasting: 1\nI0201 12:33:26.060274    2796 log.go:172] (0xc0006c6370) (0xc0005a4be0) Stream removed, broadcasting: 3\nI0201 12:33:26.060282    2796 log.go:172] (0xc0006c6370) (0xc0006ec6e0) Stream removed, broadcasting: 5\n"
Feb  1 12:33:26.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  1 12:33:26.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  1 12:33:26.112: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  1 12:33:26.112: INFO: Waiting for statefulset status.replicas updated to 0
Feb  1 12:33:26.140: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  1 12:33:36.294: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:33:36.294: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:33:36.294: INFO: 
Feb  1 12:33:36.294: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  1 12:33:37.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.871057061s
Feb  1 12:33:39.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.231317439s
Feb  1 12:33:40.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.078114012s
Feb  1 12:33:41.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.002222162s
Feb  1 12:33:42.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.980822525s
Feb  1 12:33:44.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948801677s
Feb  1 12:33:45.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 398.333767ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-rbfsm
Feb  1 12:33:47.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:33:48.279: INFO: stderr: "I0201 12:33:47.452666    2818 log.go:172] (0xc000706370) (0xc000722640) Create stream\nI0201 12:33:47.453048    2818 log.go:172] (0xc000706370) (0xc000722640) Stream added, broadcasting: 1\nI0201 12:33:47.461036    2818 log.go:172] (0xc000706370) Reply frame received for 1\nI0201 12:33:47.461079    2818 log.go:172] (0xc000706370) (0xc0007226e0) Create stream\nI0201 12:33:47.461094    2818 log.go:172] (0xc000706370) (0xc0007226e0) Stream added, broadcasting: 3\nI0201 12:33:47.462437    2818 log.go:172] (0xc000706370) Reply frame received for 3\nI0201 12:33:47.462461    2818 log.go:172] (0xc000706370) (0xc000722780) Create stream\nI0201 12:33:47.462469    2818 log.go:172] (0xc000706370) (0xc000722780) Stream added, broadcasting: 5\nI0201 12:33:47.463755    2818 log.go:172] (0xc000706370) Reply frame received for 5\nI0201 12:33:47.713112    2818 log.go:172] (0xc000706370) Data frame received for 3\nI0201 12:33:47.713337    2818 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0201 12:33:47.713394    2818 log.go:172] (0xc0007226e0) (3) Data frame sent\nI0201 12:33:48.258859    2818 log.go:172] (0xc000706370) Data frame received for 1\nI0201 12:33:48.259216    2818 log.go:172] (0xc000706370) (0xc0007226e0) Stream removed, broadcasting: 3\nI0201 12:33:48.259315    2818 log.go:172] (0xc000722640) (1) Data frame handling\nI0201 12:33:48.259355    2818 log.go:172] (0xc000722640) (1) Data frame sent\nI0201 12:33:48.259370    2818 log.go:172] (0xc000706370) (0xc000722780) Stream removed, broadcasting: 5\nI0201 12:33:48.259709    2818 log.go:172] (0xc000706370) (0xc000722640) Stream removed, broadcasting: 1\nI0201 12:33:48.259925    2818 log.go:172] (0xc000706370) Go away received\nI0201 12:33:48.261589    2818 log.go:172] (0xc000706370) (0xc000722640) Stream removed, broadcasting: 1\nI0201 12:33:48.262611    2818 log.go:172] (0xc000706370) (0xc0007226e0) Stream removed, broadcasting: 3\nI0201 12:33:48.262640    2818 log.go:172] (0xc000706370) (0xc000722780) Stream removed, broadcasting: 5\n"
Feb  1 12:33:48.279: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  1 12:33:48.280: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  1 12:33:48.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:33:49.956: INFO: stderr: "I0201 12:33:49.307142    2839 log.go:172] (0xc0006b4370) (0xc0006da5a0) Create stream\nI0201 12:33:49.307586    2839 log.go:172] (0xc0006b4370) (0xc0006da5a0) Stream added, broadcasting: 1\nI0201 12:33:49.339433    2839 log.go:172] (0xc0006b4370) Reply frame received for 1\nI0201 12:33:49.339515    2839 log.go:172] (0xc0006b4370) (0xc0006da640) Create stream\nI0201 12:33:49.339532    2839 log.go:172] (0xc0006b4370) (0xc0006da640) Stream added, broadcasting: 3\nI0201 12:33:49.354063    2839 log.go:172] (0xc0006b4370) Reply frame received for 3\nI0201 12:33:49.354189    2839 log.go:172] (0xc0006b4370) (0xc000020c80) Create stream\nI0201 12:33:49.354205    2839 log.go:172] (0xc0006b4370) (0xc000020c80) Stream added, broadcasting: 5\nI0201 12:33:49.363525    2839 log.go:172] (0xc0006b4370) Reply frame received for 5\nI0201 12:33:49.710802    2839 log.go:172] (0xc0006b4370) Data frame received for 5\nI0201 12:33:49.710951    2839 log.go:172] (0xc000020c80) (5) Data frame handling\nI0201 12:33:49.710983    2839 log.go:172] (0xc000020c80) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0201 12:33:49.711035    2839 log.go:172] (0xc0006b4370) Data frame received for 3\nI0201 12:33:49.711050    2839 log.go:172] (0xc0006da640) (3) Data frame handling\nI0201 12:33:49.711059    2839 log.go:172] (0xc0006da640) (3) Data frame sent\nI0201 12:33:49.947858    2839 log.go:172] (0xc0006b4370) (0xc000020c80) Stream removed, broadcasting: 5\nI0201 12:33:49.948014    2839 log.go:172] (0xc0006b4370) Data frame received for 1\nI0201 12:33:49.948082    2839 log.go:172] (0xc0006b4370) (0xc0006da640) Stream removed, broadcasting: 3\nI0201 12:33:49.948108    2839 log.go:172] (0xc0006da5a0) (1) Data frame handling\nI0201 12:33:49.948128    2839 log.go:172] (0xc0006da5a0) (1) Data frame sent\nI0201 12:33:49.948139    2839 log.go:172] (0xc0006b4370) (0xc0006da5a0) Stream removed, broadcasting: 1\nI0201 12:33:49.948558    2839 log.go:172] (0xc0006b4370) (0xc0006da5a0) Stream removed, broadcasting: 1\nI0201 12:33:49.948568    2839 log.go:172] (0xc0006b4370) (0xc0006da640) Stream removed, broadcasting: 3\nI0201 12:33:49.948573    2839 log.go:172] (0xc0006b4370) (0xc000020c80) Stream removed, broadcasting: 5\n"
Feb  1 12:33:49.956: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  1 12:33:49.956: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  1 12:33:49.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:33:50.640: INFO: stderr: "I0201 12:33:50.187926    2859 log.go:172] (0xc0007442c0) (0xc0007c8640) Create stream\nI0201 12:33:50.188215    2859 log.go:172] (0xc0007442c0) (0xc0007c8640) Stream added, broadcasting: 1\nI0201 12:33:50.191886    2859 log.go:172] (0xc0007442c0) Reply frame received for 1\nI0201 12:33:50.191927    2859 log.go:172] (0xc0007442c0) (0xc000676d20) Create stream\nI0201 12:33:50.191940    2859 log.go:172] (0xc0007442c0) (0xc000676d20) Stream added, broadcasting: 3\nI0201 12:33:50.192759    2859 log.go:172] (0xc0007442c0) Reply frame received for 3\nI0201 12:33:50.192780    2859 log.go:172] (0xc0007442c0) (0xc0007c86e0) Create stream\nI0201 12:33:50.192788    2859 log.go:172] (0xc0007442c0) (0xc0007c86e0) Stream added, broadcasting: 5\nI0201 12:33:50.193626    2859 log.go:172] (0xc0007442c0) Reply frame received for 5\nI0201 12:33:50.294519    2859 log.go:172] (0xc0007442c0) Data frame received for 3\nI0201 12:33:50.294685    2859 log.go:172] (0xc000676d20) (3) Data frame handling\nI0201 12:33:50.294719    2859 log.go:172] (0xc000676d20) (3) Data frame sent\nI0201 12:33:50.294773    2859 log.go:172] (0xc0007442c0) Data frame received for 5\nI0201 12:33:50.294798    2859 log.go:172] (0xc0007c86e0) (5) Data frame handling\nI0201 12:33:50.294839    2859 log.go:172] (0xc0007c86e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0201 12:33:50.624604    2859 log.go:172] (0xc0007442c0) Data frame received for 1\nI0201 12:33:50.624903    2859 log.go:172] (0xc0007442c0) (0xc000676d20) Stream removed, broadcasting: 3\nI0201 12:33:50.625029    2859 log.go:172] (0xc0007c8640) (1) Data frame handling\nI0201 12:33:50.625065    2859 log.go:172] (0xc0007c8640) (1) Data frame sent\nI0201 12:33:50.625109    2859 log.go:172] (0xc0007442c0) (0xc0007c8640) Stream removed, broadcasting: 1\nI0201 12:33:50.625219    2859 log.go:172] (0xc0007442c0) (0xc0007c86e0) Stream removed, broadcasting: 5\nI0201 12:33:50.625345    2859 log.go:172] (0xc0007442c0) Go away received\nI0201 12:33:50.626258    2859 log.go:172] (0xc0007442c0) (0xc0007c8640) Stream removed, broadcasting: 1\nI0201 12:33:50.626288    2859 log.go:172] (0xc0007442c0) (0xc000676d20) Stream removed, broadcasting: 3\nI0201 12:33:50.626308    2859 log.go:172] (0xc0007442c0) (0xc0007c86e0) Stream removed, broadcasting: 5\n"
Feb  1 12:33:50.640: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  1 12:33:50.640: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  1 12:33:50.658: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 12:33:50.658: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  1 12:33:50.658: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
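
The three exec commands above move index.html back into the nginx web root on ss-0, ss-1 and ss-2, which is how this test toggles pod readiness: the pods' readiness check is assumed (from the mv source and target paths) to serve that file, so restoring it brings every pod back to Ready. A minimal sketch of reproducing the same step by hand, using the namespace and pod names from this run:

# Hedged manual equivalent of the readiness "heal" step above. The
# namespace and pod names come from this run; the probe path is only
# inferred from the mv targets.
NS=e2e-tests-statefulset-rbfsm
for pod in ss-0 ss-1 ss-2; do
  kubectl --namespace "$NS" exec "$pod" -- /bin/sh -c \
    'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
done
# Watch the Ready condition return to True for the whole collection.
kubectl --namespace "$NS" get pods -w
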
STEP: Scale down will not halt with unhealthy stateful pod
Feb  1 12:33:50.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  1 12:33:51.130: INFO: stderr: "I0201 12:33:50.832620    2882 log.go:172] (0xc000722370) (0xc000746640) Create stream\nI0201 12:33:50.832823    2882 log.go:172] (0xc000722370) (0xc000746640) Stream added, broadcasting: 1\nI0201 12:33:50.837079    2882 log.go:172] (0xc000722370) Reply frame received for 1\nI0201 12:33:50.837118    2882 log.go:172] (0xc000722370) (0xc00068ed20) Create stream\nI0201 12:33:50.837126    2882 log.go:172] (0xc000722370) (0xc00068ed20) Stream added, broadcasting: 3\nI0201 12:33:50.837960    2882 log.go:172] (0xc000722370) Reply frame received for 3\nI0201 12:33:50.837977    2882 log.go:172] (0xc000722370) (0xc0007466e0) Create stream\nI0201 12:33:50.837982    2882 log.go:172] (0xc000722370) (0xc0007466e0) Stream added, broadcasting: 5\nI0201 12:33:50.838540    2882 log.go:172] (0xc000722370) Reply frame received for 5\nI0201 12:33:50.957271    2882 log.go:172] (0xc000722370) Data frame received for 3\nI0201 12:33:50.957387    2882 log.go:172] (0xc00068ed20) (3) Data frame handling\nI0201 12:33:50.957425    2882 log.go:172] (0xc00068ed20) (3) Data frame sent\nI0201 12:33:51.119101    2882 log.go:172] (0xc000722370) (0xc00068ed20) Stream removed, broadcasting: 3\nI0201 12:33:51.119291    2882 log.go:172] (0xc000722370) Data frame received for 1\nI0201 12:33:51.119310    2882 log.go:172] (0xc000722370) (0xc0007466e0) Stream removed, broadcasting: 5\nI0201 12:33:51.119345    2882 log.go:172] (0xc000746640) (1) Data frame handling\nI0201 12:33:51.119373    2882 log.go:172] (0xc000746640) (1) Data frame sent\nI0201 12:33:51.119385    2882 log.go:172] (0xc000722370) (0xc000746640) Stream removed, broadcasting: 1\nI0201 12:33:51.119404    2882 log.go:172] (0xc000722370) Go away received\nI0201 12:33:51.120423    2882 log.go:172] (0xc000722370) (0xc000746640) Stream removed, broadcasting: 1\nI0201 12:33:51.120632    2882 log.go:172] (0xc000722370) (0xc00068ed20) Stream removed, broadcasting: 3\nI0201 12:33:51.120656    2882 log.go:172] (0xc000722370) (0xc0007466e0) Stream removed, broadcasting: 5\n"
Feb  1 12:33:51.130: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  1 12:33:51.130: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  1 12:33:51.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  1 12:33:51.589: INFO: stderr: "I0201 12:33:51.305581    2904 log.go:172] (0xc00069c370) (0xc00092e640) Create stream\nI0201 12:33:51.305830    2904 log.go:172] (0xc00069c370) (0xc00092e640) Stream added, broadcasting: 1\nI0201 12:33:51.311078    2904 log.go:172] (0xc00069c370) Reply frame received for 1\nI0201 12:33:51.311120    2904 log.go:172] (0xc00069c370) (0xc000532c80) Create stream\nI0201 12:33:51.311143    2904 log.go:172] (0xc00069c370) (0xc000532c80) Stream added, broadcasting: 3\nI0201 12:33:51.311992    2904 log.go:172] (0xc00069c370) Reply frame received for 3\nI0201 12:33:51.312026    2904 log.go:172] (0xc00069c370) (0xc000614000) Create stream\nI0201 12:33:51.312047    2904 log.go:172] (0xc00069c370) (0xc000614000) Stream added, broadcasting: 5\nI0201 12:33:51.313141    2904 log.go:172] (0xc00069c370) Reply frame received for 5\nI0201 12:33:51.455184    2904 log.go:172] (0xc00069c370) Data frame received for 3\nI0201 12:33:51.455309    2904 log.go:172] (0xc000532c80) (3) Data frame handling\nI0201 12:33:51.455348    2904 log.go:172] (0xc000532c80) (3) Data frame sent\nI0201 12:33:51.577762    2904 log.go:172] (0xc00069c370) Data frame received for 1\nI0201 12:33:51.577883    2904 log.go:172] (0xc00069c370) (0xc000532c80) Stream removed, broadcasting: 3\nI0201 12:33:51.577929    2904 log.go:172] (0xc00092e640) (1) Data frame handling\nI0201 12:33:51.577952    2904 log.go:172] (0xc00092e640) (1) Data frame sent\nI0201 12:33:51.577989    2904 log.go:172] (0xc00069c370) (0xc000614000) Stream removed, broadcasting: 5\nI0201 12:33:51.578016    2904 log.go:172] (0xc00069c370) (0xc00092e640) Stream removed, broadcasting: 1\nI0201 12:33:51.578039    2904 log.go:172] (0xc00069c370) Go away received\nI0201 12:33:51.579039    2904 log.go:172] (0xc00069c370) (0xc00092e640) Stream removed, broadcasting: 1\nI0201 12:33:51.579118    2904 log.go:172] (0xc00069c370) (0xc000532c80) Stream removed, broadcasting: 3\nI0201 12:33:51.579125    2904 log.go:172] (0xc00069c370) (0xc000614000) Stream removed, broadcasting: 5\n"
Feb  1 12:33:51.589: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  1 12:33:51.589: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  1 12:33:51.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  1 12:33:52.146: INFO: stderr: "I0201 12:33:51.761710    2925 log.go:172] (0xc0006d22c0) (0xc0006f6640) Create stream\nI0201 12:33:51.761951    2925 log.go:172] (0xc0006d22c0) (0xc0006f6640) Stream added, broadcasting: 1\nI0201 12:33:51.766281    2925 log.go:172] (0xc0006d22c0) Reply frame received for 1\nI0201 12:33:51.766307    2925 log.go:172] (0xc0006d22c0) (0xc00076edc0) Create stream\nI0201 12:33:51.766316    2925 log.go:172] (0xc0006d22c0) (0xc00076edc0) Stream added, broadcasting: 3\nI0201 12:33:51.767222    2925 log.go:172] (0xc0006d22c0) Reply frame received for 3\nI0201 12:33:51.767243    2925 log.go:172] (0xc0006d22c0) (0xc0003ac000) Create stream\nI0201 12:33:51.767252    2925 log.go:172] (0xc0006d22c0) (0xc0003ac000) Stream added, broadcasting: 5\nI0201 12:33:51.768118    2925 log.go:172] (0xc0006d22c0) Reply frame received for 5\nI0201 12:33:51.921013    2925 log.go:172] (0xc0006d22c0) Data frame received for 3\nI0201 12:33:51.921153    2925 log.go:172] (0xc00076edc0) (3) Data frame handling\nI0201 12:33:51.921196    2925 log.go:172] (0xc00076edc0) (3) Data frame sent\nI0201 12:33:52.136790    2925 log.go:172] (0xc0006d22c0) Data frame received for 1\nI0201 12:33:52.136891    2925 log.go:172] (0xc0006f6640) (1) Data frame handling\nI0201 12:33:52.136907    2925 log.go:172] (0xc0006f6640) (1) Data frame sent\nI0201 12:33:52.137162    2925 log.go:172] (0xc0006d22c0) (0xc0006f6640) Stream removed, broadcasting: 1\nI0201 12:33:52.137594    2925 log.go:172] (0xc0006d22c0) (0xc00076edc0) Stream removed, broadcasting: 3\nI0201 12:33:52.137716    2925 log.go:172] (0xc0006d22c0) (0xc0003ac000) Stream removed, broadcasting: 5\nI0201 12:33:52.137779    2925 log.go:172] (0xc0006d22c0) (0xc0006f6640) Stream removed, broadcasting: 1\nI0201 12:33:52.137789    2925 log.go:172] (0xc0006d22c0) (0xc00076edc0) Stream removed, broadcasting: 3\nI0201 12:33:52.137794    2925 log.go:172] (0xc0006d22c0) (0xc0003ac000) Stream removed, broadcasting: 5\n"
Feb  1 12:33:52.147: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  1 12:33:52.147: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  1 12:33:52.147: INFO: Waiting for statefulset status.replicas updated to 0
Feb  1 12:33:52.155: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  1 12:34:02.197: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  1 12:34:02.197: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  1 12:34:02.197: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
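
These waits poll the StatefulSet and its pods until status.readyReplicas drops to 0 and each pod reports Ready=false. The same fields can be read directly with kubectl; a short sketch, assuming the ss name and namespace above:

# Read the fields the test is polling (sketch, using this run's names).
NS=e2e-tests-statefulset-rbfsm
kubectl --namespace "$NS" get statefulset ss \
  -o jsonpath='{.status.replicas} {.status.readyReplicas}{"\n"}'
# Per-pod Ready condition, matching the Ready=false checks above.
kubectl --namespace "$NS" get pod ss-0 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
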
Feb  1 12:34:02.232: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:02.232: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:02.232: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:02.232: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:02.232: INFO: 
Feb  1 12:34:02.232: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:03.405: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:03.405: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:03.405: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:03.405: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:03.405: INFO: 
Feb  1 12:34:03.405: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:04.668: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:04.668: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:04.668: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:04.668: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:04.668: INFO: 
Feb  1 12:34:04.668: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:05.683: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:05.684: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:05.684: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:05.684: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:05.684: INFO: 
Feb  1 12:34:05.684: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:06.988: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:06.988: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:06.988: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:06.988: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:06.988: INFO: 
Feb  1 12:34:06.988: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:07.999: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:07.999: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:07.999: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:07.999: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:07.999: INFO: 
Feb  1 12:34:07.999: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:09.019: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:09.019: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:09.019: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:09.019: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:09.019: INFO: 
Feb  1 12:34:09.019: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:10.032: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:10.032: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:10.032: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:10.032: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:10.032: INFO: 
Feb  1 12:34:10.032: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:11.050: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:11.050: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:15 +0000 UTC  }]
Feb  1 12:34:11.050: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:11.050: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:11.050: INFO: 
Feb  1 12:34:11.050: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  1 12:34:12.073: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  1 12:34:12.073: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-01 12:33:36 +0000 UTC  }]
Feb  1 12:34:12.073: INFO: 
Feb  1 12:34:12.073: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-rbfsm
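
The scale-down is driven by setting the StatefulSet's spec.replicas to 0; because this suite exercises burst scaling (Parallel pod management, per the test name in the summary below), the controller removes pods without waiting for them to become Ready first. A rough manual equivalent, not the framework's own code path:

# Rough manual equivalent of the scale-down (the test drives this
# through its framework client, not kubectl scale).
NS=e2e-tests-statefulset-rbfsm
kubectl --namespace "$NS" scale statefulset ss --replicas=0
kubectl --namespace "$NS" get pods -w   # pods terminate even while unready
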
Feb  1 12:34:13.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:34:13.302: INFO: rc: 1
Feb  1 12:34:13.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00152cf60 exit status 1   true [0xc001c9eaa0 0xc001c9eab8 0xc001c9ead0] [0xc001c9eaa0 0xc001c9eab8 0xc001c9ead0] [0xc001c9eab0 0xc001c9eac8] [0x935700 0x935700] 0xc0014973e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  1 12:34:23.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:34:23.492: INFO: rc: 1
Feb  1 12:34:23.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a77e00 exit status 1   true [0xc0003e76f0 0xc0003e7720 0xc0003e7768] [0xc0003e76f0 0xc0003e7720 0xc0003e7768] [0xc0003e7718 0xc0003e7750] [0x935700 0x935700] 0xc001a73da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:34:33.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:34:33.703: INFO: rc: 1
Feb  1 12:34:33.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00152d080 exit status 1   true [0xc001c9ead8 0xc001c9eaf0 0xc001c9eb08] [0xc001c9ead8 0xc001c9eaf0 0xc001c9eb08] [0xc001c9eae8 0xc001c9eb00] [0x935700 0x935700] 0xc0014979e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:34:43.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:34:43.870: INFO: rc: 1
Feb  1 12:34:43.870: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a77f50 exit status 1   true [0xc0003e7770 0xc0003e7790 0xc0003e77c0] [0xc0003e7770 0xc0003e7790 0xc0003e77c0] [0xc0003e7788 0xc0003e77a8] [0x935700 0x935700] 0xc0017ea240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:34:53.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:34:54.019: INFO: rc: 1
Feb  1 12:34:54.019: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00120e360 exit status 1   true [0xc001e52d70 0xc001e52d88 0xc001e52da0] [0xc001e52d70 0xc001e52d88 0xc001e52da0] [0xc001e52d80 0xc001e52d98] [0x935700 0x935700] 0xc0025d0480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:04.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:04.185: INFO: rc: 1
Feb  1 12:35:04.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c120 exit status 1   true [0xc00000e018 0xc001e52010 0xc001e52028] [0xc00000e018 0xc001e52010 0xc001e52028] [0xc001e52008 0xc001e52020] [0x935700 0x935700] 0xc000c0e2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:14.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:14.305: INFO: rc: 1
Feb  1 12:35:14.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c240 exit status 1   true [0xc001e52030 0xc001e52048 0xc001e52070] [0xc001e52030 0xc001e52048 0xc001e52070] [0xc001e52040 0xc001e52060] [0x935700 0x935700] 0xc000c0e540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:24.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:24.468: INFO: rc: 1
Feb  1 12:35:24.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c390 exit status 1   true [0xc001e52088 0xc001e520c8 0xc001e520f0] [0xc001e52088 0xc001e520c8 0xc001e520f0] [0xc001e520c0 0xc001e520d8] [0x935700 0x935700] 0xc000c0e7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:34.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:34.617: INFO: rc: 1
Feb  1 12:35:34.618: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000df61b0 exit status 1   true [0xc001b72000 0xc001b72018 0xc001b72030] [0xc001b72000 0xc001b72018 0xc001b72030] [0xc001b72010 0xc001b72028] [0x935700 0x935700] 0xc0009f2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:44.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:44.774: INFO: rc: 1
Feb  1 12:35:44.774: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c4e0 exit status 1   true [0xc001e52108 0xc001e52120 0xc001e52158] [0xc001e52108 0xc001e52120 0xc001e52158] [0xc001e52118 0xc001e52140] [0x935700 0x935700] 0xc000c0eb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:35:54.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:35:54.909: INFO: rc: 1
Feb  1 12:35:54.909: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000df6480 exit status 1   true [0xc001b72038 0xc001b72050 0xc001b72068] [0xc001b72038 0xc001b72050 0xc001b72068] [0xc001b72048 0xc001b72060] [0x935700 0x935700] 0xc0009f3c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:04.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:05.113: INFO: rc: 1
Feb  1 12:36:05.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba0180 exit status 1   true [0xc001456000 0xc001456018 0xc001456030] [0xc001456000 0xc001456018 0xc001456030] [0xc001456010 0xc001456028] [0x935700 0x935700] 0xc001bfe240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:15.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:15.375: INFO: rc: 1
Feb  1 12:36:15.375: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000df6720 exit status 1   true [0xc001b72070 0xc001b72088 0xc001b720a0] [0xc001b72070 0xc001b72088 0xc001b720a0] [0xc001b72080 0xc001b72098] [0x935700 0x935700] 0xc001b56060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:25.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:25.548: INFO: rc: 1
Feb  1 12:36:25.548: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000df6840 exit status 1   true [0xc001b720a8 0xc001b720c0 0xc001b720d8] [0xc001b720a8 0xc001b720c0 0xc001b720d8] [0xc001b720b8 0xc001b720d0] [0x935700 0x935700] 0xc001b56480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:35.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:35.690: INFO: rc: 1
Feb  1 12:36:35.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba0300 exit status 1   true [0xc001456038 0xc001456050 0xc001456068] [0xc001456038 0xc001456050 0xc001456068] [0xc001456048 0xc001456060] [0x935700 0x935700] 0xc001bfe4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:45.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:45.886: INFO: rc: 1
Feb  1 12:36:45.886: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c810 exit status 1   true [0xc001e52160 0xc001e52178 0xc001e52198] [0xc001e52160 0xc001e52178 0xc001e52198] [0xc001e52170 0xc001e52188] [0x935700 0x935700] 0xc000c0f1a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:36:55.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:36:56.028: INFO: rc: 1
Feb  1 12:36:56.028: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4000 exit status 1   true [0xc00050a0b0 0xc00050a160 0xc00050a250] [0xc00050a0b0 0xc00050a160 0xc00050a250] [0xc00050a0e0 0xc00050a1e8] [0x935700 0x935700] 0xc001872300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:06.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:06.199: INFO: rc: 1
Feb  1 12:37:06.199: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4360 exit status 1   true [0xc00000e018 0xc001456008 0xc001456020] [0xc00000e018 0xc001456008 0xc001456020] [0xc001456000 0xc001456018] [0x935700 0x935700] 0xc0009f2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:16.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:16.371: INFO: rc: 1
Feb  1 12:37:16.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4540 exit status 1   true [0xc001456028 0xc001456040 0xc001456058] [0xc001456028 0xc001456040 0xc001456058] [0xc001456038 0xc001456050] [0x935700 0x935700] 0xc0009f3c80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:26.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:26.588: INFO: rc: 1
Feb  1 12:37:26.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4690 exit status 1   true [0xc001456060 0xc001456078 0xc001456090] [0xc001456060 0xc001456078 0xc001456090] [0xc001456070 0xc001456088] [0x935700 0x935700] 0xc000c0e060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:36.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:36.789: INFO: rc: 1
Feb  1 12:37:36.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002268210 exit status 1   true [0xc001b72000 0xc001b72018 0xc001b72030] [0xc001b72000 0xc001b72018 0xc001b72030] [0xc001b72010 0xc001b72028] [0x935700 0x935700] 0xc001bfe1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:46.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:46.955: INFO: rc: 1
Feb  1 12:37:46.956: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba01b0 exit status 1   true [0xc00050a270 0xc00050a2a0 0xc00050a320] [0xc00050a270 0xc00050a2a0 0xc00050a320] [0xc00050a280 0xc00050a2f8] [0x935700 0x935700] 0xc001b56000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:37:56.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:37:57.141: INFO: rc: 1
Feb  1 12:37:57.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be47e0 exit status 1   true [0xc001456098 0xc0014560b0 0xc0014560c8] [0xc001456098 0xc0014560b0 0xc0014560c8] [0xc0014560a8 0xc0014560c0] [0x935700 0x935700] 0xc000c0e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:07.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:07.315: INFO: rc: 1
Feb  1 12:38:07.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba0330 exit status 1   true [0xc00050a328 0xc00050a390 0xc00050a418] [0xc00050a328 0xc00050a390 0xc00050a418] [0xc00050a380 0xc00050a3f8] [0x935700 0x935700] 0xc001b563c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:17.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:17.531: INFO: rc: 1
Feb  1 12:38:17.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba04b0 exit status 1   true [0xc00050a4f0 0xc00050a5d0 0xc00050a698] [0xc00050a4f0 0xc00050a5d0 0xc00050a698] [0xc00050a5a0 0xc00050a680] [0x935700 0x935700] 0xc001b56720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:27.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:27.731: INFO: rc: 1
Feb  1 12:38:27.731: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4ab0 exit status 1   true [0xc0014560d0 0xc0014560e8 0xc001456100] [0xc0014560d0 0xc0014560e8 0xc001456100] [0xc0014560e0 0xc0014560f8] [0x935700 0x935700] 0xc000c0e660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:37.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:37.918: INFO: rc: 1
Feb  1 12:38:37.918: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000be4c60 exit status 1   true [0xc001456108 0xc001456120 0xc001456138] [0xc001456108 0xc001456120 0xc001456138] [0xc001456118 0xc001456130] [0x935700 0x935700] 0xc000c0e900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:47.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:48.104: INFO: rc: 1
Feb  1 12:38:48.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000ba06f0 exit status 1   true [0xc00050a6f8 0xc00050a7a8 0xc00050a8e8] [0xc00050a6f8 0xc00050a7a8 0xc00050a8e8] [0xc00050a748 0xc00050a878] [0x935700 0x935700] 0xc001b56de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:38:58.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:38:58.291: INFO: rc: 1
Feb  1 12:38:58.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c0f0 exit status 1   true [0xc001e52008 0xc001e52020 0xc001e52038] [0xc001e52008 0xc001e52020 0xc001e52038] [0xc001e52018 0xc001e52030] [0x935700 0x935700] 0xc001926a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:39:08.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:39:08.522: INFO: rc: 1
Feb  1 12:39:08.522: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00172c240 exit status 1   true [0xc0000e8288 0xc001e52048 0xc001e52070] [0xc0000e8288 0xc001e52048 0xc001e52070] [0xc001e52040 0xc001e52060] [0x935700 0x935700] 0xc0009f2a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb  1 12:39:18.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rbfsm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  1 12:39:18.677: INFO: rc: 1
Feb  1 12:39:18.677: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
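
The block above is the framework's RunHostCmd retry loop: the same kubectl exec is reissued every 10 seconds, failing first because the nginx container is already gone and then because the ss-2 pod itself no longer exists, until the roughly five-minute budget visible in the timestamps (12:34:13 to 12:39:18) is used up and the test proceeds with empty stdout. An illustrative shell version of that retry pattern, not the framework's implementation:

# Illustrative retry loop matching the behaviour logged above; the 10s
# interval and ~5m budget are taken from the timestamps, nothing else.
NS=e2e-tests-statefulset-rbfsm
deadline=$((SECONDS + 300))
until kubectl --namespace "$NS" exec ss-2 -- /bin/sh -c \
      'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
  [ "$SECONDS" -ge "$deadline" ] && break   # give up, as the test does
  sleep 10
done
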
Feb  1 12:39:18.677: INFO: Scaling statefulset ss to 0
Feb  1 12:39:18.692: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  1 12:39:18.694: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rbfsm
Feb  1 12:39:18.697: INFO: Scaling statefulset ss to 0
Feb  1 12:39:18.704: INFO: Waiting for statefulset status.replicas updated to 0
Feb  1 12:39:18.706: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:39:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-rbfsm" for this suite.
Feb  1 12:39:26.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:39:26.994: INFO: namespace: e2e-tests-statefulset-rbfsm, resource: bindings, ignored listing per whitelist
Feb  1 12:39:27.196: INFO: namespace e2e-tests-statefulset-rbfsm deletion completed in 8.453408766s

• [SLOW TEST:372.267 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:39:27.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
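
This test runs a busybox container whose root filesystem is mounted read-only and checks that the container cannot write to it. A minimal sketch of such a pod (readOnlyRootFilesystem is the standard Kubernetes securityContext field; the pod name, image and command here are illustrative, not the test's actual spec):

# Sketch only: standard securityContext field, illustrative pod details.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# The write to /file fails with "Read-only file system".
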
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:39:37.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zpnlk" for this suite.
Feb  1 12:40:25.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:40:25.905: INFO: namespace: e2e-tests-kubelet-test-zpnlk, resource: bindings, ignored listing per whitelist
Feb  1 12:40:25.985: INFO: namespace e2e-tests-kubelet-test-zpnlk deletion completed in 48.338191264s

• [SLOW TEST:58.789 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
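The Kubelet test above schedules a busybox container with a read-only root filesystem and verifies that writes to it fail. A minimal sketch of such a pod; the pod name, image tag, and command are illustrative placeholders, and only securityContext.readOnlyRootFilesystem is the field the test exercises:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs            # placeholder, not the test's generated name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo data > /file || true"]   # the write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true
EOF
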
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:40:25.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0560c5fa-44f0-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:40:26.157: INFO: Waiting up to 5m0s for pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-qvr4s" to be "success or failure"
Feb  1 12:40:26.169: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257985ms
Feb  1 12:40:28.502: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344943465s
Feb  1 12:40:30.527: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369804251s
Feb  1 12:40:32.640: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482675286s
Feb  1 12:40:34.676: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519070667s
Feb  1 12:40:36.947: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.790475372s
Feb  1 12:40:39.064: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.907021437s
STEP: Saw pod success
Feb  1 12:40:39.064: INFO: Pod "pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:40:39.094: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:40:39.538: INFO: Waiting for pod pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005 to disappear
Feb  1 12:40:39.545: INFO: Pod pod-secrets-05616b24-44f0-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:40:39.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qvr4s" for this suite.
Feb  1 12:40:47.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:40:47.744: INFO: namespace: e2e-tests-secrets-qvr4s, resource: bindings, ignored listing per whitelist
Feb  1 12:40:47.753: INFO: namespace e2e-tests-secrets-qvr4s deletion completed in 8.197511044s

• [SLOW TEST:21.768 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
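The Secrets test above mounts secret secret-test-map-0560c5fa-44f0-11ea-a88d-0242ac110005 into a pod "with mappings", i.e. using items to project a secret key under a chosen file name. A rough sketch of that shape; the key and path names below are illustrative, not the test's exact data:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapping-demo       # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-0560c5fa-44f0-11ea-a88d-0242ac110005
      items:
      - key: data-1                    # illustrative key name
        path: new-path-data-1          # the "mapping": expose the key under a different file name
EOF
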
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:40:47.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  1 12:40:47.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sqch5'
Feb  1 12:40:49.962: INFO: stderr: ""
Feb  1 12:40:49.962: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  1 12:41:00.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sqch5 -o json'
Feb  1 12:41:00.186: INFO: stderr: ""
Feb  1 12:41:00.186: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-01T12:40:49Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-sqch5\",\n        \"resourceVersion\": \"20196315\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-sqch5/pods/e2e-test-nginx-pod\",\n        \"uid\": \"138f26b5-44f0-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-z6fjq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-z6fjq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-z6fjq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-01T12:40:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-01T12:40:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-01T12:40:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-01T12:40:49Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://bb4b85fd21a851fabb992ba0e3021ba2de9e5fc6cbfef928b9b3f831ae4b0aa4\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-01T12:40:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-01T12:40:50Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  1 12:41:00.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-sqch5'
Feb  1 12:41:00.732: INFO: stderr: ""
Feb  1 12:41:00.733: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  1 12:41:00.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sqch5'
Feb  1 12:41:09.242: INFO: stderr: ""
Feb  1 12:41:09.242: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:41:09.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sqch5" for this suite.
Feb  1 12:41:15.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:41:15.342: INFO: namespace: e2e-tests-kubectl-sqch5, resource: bindings, ignored listing per whitelist
Feb  1 12:41:15.413: INFO: namespace e2e-tests-kubectl-sqch5 deletion completed in 6.161844631s

• [SLOW TEST:27.660 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
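The Kubectl replace test above fetches the running pod as JSON, swaps the image to docker.io/library/busybox:1.29, and feeds the result back through kubectl replace. A hand-run equivalent of the logged commands; the sed substitution is an illustrative stand-in for the test's in-memory edit:

kubectl --namespace=e2e-tests-kubectl-sqch5 get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl --namespace=e2e-tests-kubectl-sqch5 replace -f -
# confirm the new image on the pod spec
kubectl --namespace=e2e-tests-kubectl-sqch5 get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
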
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:41:15.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  1 12:41:15.576: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-jtb54" to be "success or failure"
Feb  1 12:41:15.731: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 155.869308ms
Feb  1 12:41:17.752: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176245182s
Feb  1 12:41:19.770: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194316524s
Feb  1 12:41:21.796: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220684084s
Feb  1 12:41:23.837: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261527724s
Feb  1 12:41:25.867: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.291708881s
Feb  1 12:41:27.880: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.304092884s
STEP: Saw pod success
Feb  1 12:41:27.880: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  1 12:41:27.883: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  1 12:41:27.953: INFO: Waiting for pod pod-host-path-test to disappear
Feb  1 12:41:27.966: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:41:27.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-jtb54" for this suite.
Feb  1 12:41:34.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:41:34.229: INFO: namespace: e2e-tests-hostpath-jtb54, resource: bindings, ignored listing per whitelist
Feb  1 12:41:34.231: INFO: namespace e2e-tests-hostpath-jtb54 deletion completed in 6.251952143s

• [SLOW TEST:18.818 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
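The HostPath test above creates pod-host-path-test with a hostPath volume and checks the mode the volume gets inside the container. A small sketch of a pod that mounts a hostPath and prints the mount's mode; the host path, pod name, and image are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo             # placeholder; the test's pod is pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]   # print the octal mode of the mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo         # illustrative host directory
      type: DirectoryOrCreate
EOF
kubectl logs pod-host-path-demo -c test-container-1
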
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:41:34.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  1 12:41:34.434: INFO: Waiting up to 5m0s for pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-qcnqq" to be "success or failure"
Feb  1 12:41:34.605: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 170.42703ms
Feb  1 12:41:36.625: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191126319s
Feb  1 12:41:38.682: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247861809s
Feb  1 12:41:40.701: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267018973s
Feb  1 12:41:42.717: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282433177s
Feb  1 12:41:44.729: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.294905944s
STEP: Saw pod success
Feb  1 12:41:44.729: INFO: Pod "pod-2e14bc5f-44f0-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:41:44.733: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2e14bc5f-44f0-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:41:44.798: INFO: Waiting for pod pod-2e14bc5f-44f0-11ea-a88d-0242ac110005 to disappear
Feb  1 12:41:44.810: INFO: Pod pod-2e14bc5f-44f0-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:41:44.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qcnqq" for this suite.
Feb  1 12:41:50.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:41:51.014: INFO: namespace: e2e-tests-emptydir-qcnqq, resource: bindings, ignored listing per whitelist
Feb  1 12:41:51.056: INFO: namespace e2e-tests-emptydir-qcnqq deletion completed in 6.192850006s

• [SLOW TEST:16.824 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
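The EmptyDir test above exercises a tmpfs-backed emptyDir (medium: Memory) written by a non-root user with 0666 permissions. A sketch of that combination; the UID, file name, and pod name are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-demo        # placeholder name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # illustrative non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # back the volume with tmpfs instead of node disk
EOF
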
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:41:51.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  1 12:41:51.778: INFO: Waiting up to 5m0s for pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c" in namespace "e2e-tests-svcaccounts-cfl8v" to be "success or failure"
Feb  1 12:41:51.886: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 107.441799ms
Feb  1 12:41:53.934: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156065121s
Feb  1 12:41:56.003: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224397007s
Feb  1 12:41:58.017: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239075788s
Feb  1 12:42:00.041: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26251319s
Feb  1 12:42:02.108: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33029629s
Feb  1 12:42:04.305: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.526881476s
Feb  1 12:42:06.321: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.543279119s
Feb  1 12:42:08.332: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Running", Reason="", readiness=false. Elapsed: 16.554206385s
Feb  1 12:42:10.345: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.566407303s
STEP: Saw pod success
Feb  1 12:42:10.345: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c" satisfied condition "success or failure"
Feb  1 12:42:10.350: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c container token-test: 
STEP: delete the pod
Feb  1 12:42:11.352: INFO: Waiting for pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c to disappear
Feb  1 12:42:11.368: INFO: Pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-c9q6c no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  1 12:42:11.405: INFO: Waiting up to 5m0s for pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw" in namespace "e2e-tests-svcaccounts-cfl8v" to be "success or failure"
Feb  1 12:42:11.436: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.853131ms
Feb  1 12:42:13.726: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320735776s
Feb  1 12:42:15.759: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353385627s
Feb  1 12:42:18.372: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.966979162s
Feb  1 12:42:20.491: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.08542126s
Feb  1 12:42:22.515: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.11021899s
Feb  1 12:42:24.849: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.443934539s
Feb  1 12:42:26.952: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.546825508s
Feb  1 12:42:28.970: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.565042646s
STEP: Saw pod success
Feb  1 12:42:28.970: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw" satisfied condition "success or failure"
Feb  1 12:42:28.977: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw container root-ca-test: 
STEP: delete the pod
Feb  1 12:42:29.099: INFO: Waiting for pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw to disappear
Feb  1 12:42:29.115: INFO: Pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-tptnw no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  1 12:42:29.140: INFO: Waiting up to 5m0s for pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh" in namespace "e2e-tests-svcaccounts-cfl8v" to be "success or failure"
Feb  1 12:42:29.147: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715344ms
Feb  1 12:42:31.159: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018836941s
Feb  1 12:42:33.178: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037504073s
Feb  1 12:42:35.520: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379715153s
Feb  1 12:42:37.550: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410255846s
Feb  1 12:42:39.574: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.43429776s
Feb  1 12:42:41.592: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.452133288s
Feb  1 12:42:43.619: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.478455749s
Feb  1 12:42:45.638: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.498298118s
Feb  1 12:42:47.659: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.518825438s
STEP: Saw pod success
Feb  1 12:42:47.659: INFO: Pod "pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh" satisfied condition "success or failure"
Feb  1 12:42:47.666: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh container namespace-test: 
STEP: delete the pod
Feb  1 12:42:47.822: INFO: Waiting for pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh to disappear
Feb  1 12:42:47.898: INFO: Pod pod-service-account-386ae4d6-44f0-11ea-a88d-0242ac110005-smnlh no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:42:47.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-cfl8v" for this suite.
Feb  1 12:42:55.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:42:56.127: INFO: namespace: e2e-tests-svcaccounts-cfl8v, resource: bindings, ignored listing per whitelist
Feb  1 12:42:56.198: INFO: namespace e2e-tests-svcaccounts-cfl8v deletion completed in 8.284763298s

• [SLOW TEST:65.142 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
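The ServiceAccounts test above verifies that the auto-created token, the cluster root CA, and the namespace are all mounted into pods under the well-known service-account path. The same files can be inspected from any running pod; the pod name below is a placeholder:

kubectl exec <some-running-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expected entries: ca.crt  namespace  token
kubectl exec <some-running-pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
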
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:42:56.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb  1 12:43:06.716: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:43:48.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-r57k7" for this suite.
Feb  1 12:43:57.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:43:57.427: INFO: namespace: e2e-tests-namespaces-r57k7, resource: bindings, ignored listing per whitelist
Feb  1 12:43:57.460: INFO: namespace e2e-tests-namespaces-r57k7 deletion completed in 8.391151554s
STEP: Destroying namespace "e2e-tests-nsdeletetest-4pqkf" for this suite.
Feb  1 12:43:57.465: INFO: Namespace e2e-tests-nsdeletetest-4pqkf was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-fj9xj" for this suite.
Feb  1 12:44:03.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:44:03.681: INFO: namespace: e2e-tests-nsdeletetest-fj9xj, resource: bindings, ignored listing per whitelist
Feb  1 12:44:03.797: INFO: namespace e2e-tests-nsdeletetest-fj9xj deletion completed in 6.33181795s

• [SLOW TEST:67.598 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
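The Namespaces test above creates a pod in a throwaway namespace, deletes the namespace, and then confirms the pods were removed with it. A rough manual equivalent; the namespace and pod names are illustrative, and the run generator matches the one used elsewhere in this log:

kubectl create namespace nsdelete-demo
kubectl --namespace=nsdelete-demo run test-pod --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl delete namespace nsdelete-demo
# after deletion completes, the namespace and everything in it is gone
kubectl get namespace nsdelete-demo    # Error from server (NotFound) once deletion has finished
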
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:44:03.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  1 12:44:14.659: INFO: Successfully updated pod "annotationupdate873dddd8-44f0-11ea-a88d-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:44:16.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w5v4s" for this suite.
Feb  1 12:44:40.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:44:40.971: INFO: namespace: e2e-tests-projected-w5v4s, resource: bindings, ignored listing per whitelist
Feb  1 12:44:41.008: INFO: namespace e2e-tests-projected-w5v4s deletion completed in 24.228104659s

• [SLOW TEST:37.212 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
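The Projected downwardAPI test above mounts pod annotations through a projected volume and checks that the file content follows an annotation update. A sketch of that wiring plus the update step; the pod name, annotation key, and mount path are illustrative, while the projected/downwardAPI field names are the upstream API:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo          # placeholder name
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
# the kubelet refreshes the projected file after the update (not instantly)
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
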
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:44:41.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  1 12:44:52.088: INFO: Successfully updated pod "annotationupdate9d825168-44f0-11ea-a88d-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:44:54.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-965qf" for this suite.
Feb  1 12:45:18.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:45:18.285: INFO: namespace: e2e-tests-downward-api-965qf, resource: bindings, ignored listing per whitelist
Feb  1 12:45:18.456: INFO: namespace e2e-tests-downward-api-965qf deletion completed in 24.275989014s

• [SLOW TEST:37.447 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:45:18.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-49bkx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-49bkx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.212.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.212.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.212.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.212.237_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-49bkx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-49bkx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-49bkx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-49bkx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-49bkx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.212.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.212.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.212.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.212.237_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  1 12:45:33.247: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.259: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.356: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-49bkx from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.375: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.388: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.403: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.411: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.416: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.421: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.427: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.431: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.438: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.443: INFO: Unable to read 10.109.212.237_udp@PTR from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.447: INFO: Unable to read 10.109.212.237_tcp@PTR from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.455: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.459: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.464: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-49bkx from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.468: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-49bkx from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.476: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.483: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.488: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.495: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.499: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.502: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.505: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.508: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.511: INFO: Unable to read 10.109.212.237_udp@PTR from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.515: INFO: Unable to read 10.109.212.237_tcp@PTR from pod e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005: the server could not find the requested resource (get pods dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005)
Feb  1 12:45:33.515: INFO: Lookups using e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-49bkx wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx wheezy_udp@dns-test-service.e2e-tests-dns-49bkx.svc wheezy_tcp@dns-test-service.e2e-tests-dns-49bkx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.212.237_udp@PTR 10.109.212.237_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-49bkx jessie_tcp@dns-test-service.e2e-tests-dns-49bkx jessie_udp@dns-test-service.e2e-tests-dns-49bkx.svc jessie_tcp@dns-test-service.e2e-tests-dns-49bkx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-49bkx.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.212.237_udp@PTR 10.109.212.237_tcp@PTR]

Feb  1 12:45:38.990: INFO: DNS probes using e2e-tests-dns-49bkx/dns-test-b3d9d48d-44f0-11ea-a88d-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:45:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-49bkx" for this suite.
Feb  1 12:45:47.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:45:47.662: INFO: namespace: e2e-tests-dns-49bkx, resource: bindings, ignored listing per whitelist
Feb  1 12:45:47.769: INFO: namespace e2e-tests-dns-49bkx deletion completed in 8.315035305s

• [SLOW TEST:29.313 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
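The DNS test above probes service A, SRV, PTR, and pod A records from both a wheezy and a jessie container using the dig loops shown in the STEP lines. Equivalent one-off queries, run from a shell inside any pod in the cluster that has dig available; the service name, namespace, and cluster IP come from the log:

dig +search dns-test-service.e2e-tests-dns-49bkx.svc A
dig +search _http._tcp.dns-test-service.e2e-tests-dns-49bkx.svc SRV
dig +noall +answer -x 10.109.212.237    # reverse (PTR) lookup for the service's cluster IP
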
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:45:47.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:45:48.006: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  1 12:45:48.016: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-hkzcc/daemonsets","resourceVersion":"20196973"},"items":null}

Feb  1 12:45:48.020: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-hkzcc/pods","resourceVersion":"20196973"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:45:48.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-hkzcc" for this suite.
Feb  1 12:45:54.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:45:54.144: INFO: namespace: e2e-tests-daemonsets-hkzcc, resource: bindings, ignored listing per whitelist
Feb  1 12:45:54.197: INFO: namespace e2e-tests-daemonsets-hkzcc deletion completed in 6.162794149s

S [SKIPPING] [6.429 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  1 12:45:48.006: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:45:54.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  1 12:45:54.401: INFO: Waiting up to 5m0s for pod "pod-c907f029-44f0-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-bgdq9" to be "success or failure"
Feb  1 12:45:54.410: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.933065ms
Feb  1 12:45:56.673: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271599508s
Feb  1 12:45:58.705: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303470929s
Feb  1 12:46:00.718: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317075149s
Feb  1 12:46:02.764: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362733357s
Feb  1 12:46:04.784: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.382562491s
STEP: Saw pod success
Feb  1 12:46:04.784: INFO: Pod "pod-c907f029-44f0-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:46:04.797: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c907f029-44f0-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:46:05.059: INFO: Waiting for pod pod-c907f029-44f0-11ea-a88d-0242ac110005 to disappear
Feb  1 12:46:05.065: INFO: Pod pod-c907f029-44f0-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:46:05.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bgdq9" for this suite.
Feb  1 12:46:11.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:46:11.187: INFO: namespace: e2e-tests-emptydir-bgdq9, resource: bindings, ignored listing per whitelist
Feb  1 12:46:11.252: INFO: namespace e2e-tests-emptydir-bgdq9 deletion completed in 6.175902385s

• [SLOW TEST:17.055 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:46:11.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  1 12:46:11.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xcnxk'
Feb  1 12:46:12.170: INFO: stderr: ""
Feb  1 12:46:12.170: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  1 12:46:13.583: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:13.584: INFO: Found 0 / 1
Feb  1 12:46:14.255: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:14.255: INFO: Found 0 / 1
Feb  1 12:46:15.180: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:15.180: INFO: Found 0 / 1
Feb  1 12:46:16.189: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:16.189: INFO: Found 0 / 1
Feb  1 12:46:17.420: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:17.420: INFO: Found 0 / 1
Feb  1 12:46:18.224: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:18.224: INFO: Found 0 / 1
Feb  1 12:46:19.182: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:19.182: INFO: Found 0 / 1
Feb  1 12:46:20.185: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:20.185: INFO: Found 0 / 1
Feb  1 12:46:21.184: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:21.184: INFO: Found 1 / 1
Feb  1 12:46:21.184: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  1 12:46:21.192: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:21.192: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  1 12:46:21.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-25z8q --namespace=e2e-tests-kubectl-xcnxk -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  1 12:46:21.398: INFO: stderr: ""
Feb  1 12:46:21.398: INFO: stdout: "pod/redis-master-25z8q patched\n"
STEP: checking annotations
Feb  1 12:46:21.409: INFO: Selector matched 1 pods for map[app:redis]
Feb  1 12:46:21.409: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:46:21.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xcnxk" for this suite.
Feb  1 12:46:45.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:46:45.579: INFO: namespace: e2e-tests-kubectl-xcnxk, resource: bindings, ignored listing per whitelist
Feb  1 12:46:45.677: INFO: namespace e2e-tests-kubectl-xcnxk deletion completed in 24.261429482s

• [SLOW TEST:34.424 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
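
The patch issued above can be replayed by hand; the pod and namespace names below are placeholders for whatever the RC created in a given run:

  kubectl patch pod <pod-name> --namespace=<namespace> -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.metadata.annotations.x}'   # should print: y
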
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:46:45.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  1 12:46:45.954: INFO: Waiting up to 5m0s for pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005" in namespace "e2e-tests-var-expansion-psd8r" to be "success or failure"
Feb  1 12:46:45.965: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.612821ms
Feb  1 12:46:47.979: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024735081s
Feb  1 12:46:50.028: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074208387s
Feb  1 12:46:52.056: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102442722s
Feb  1 12:46:54.076: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122514712s
Feb  1 12:46:56.092: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138232703s
STEP: Saw pod success
Feb  1 12:46:56.092: INFO: Pod "var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:46:56.098: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  1 12:46:57.222: INFO: Waiting for pod var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005 to disappear
Feb  1 12:46:57.249: INFO: Pod var-expansion-e7c168ae-44f0-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:46:57.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-psd8r" for this suite.
Feb  1 12:47:03.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:47:03.681: INFO: namespace: e2e-tests-var-expansion-psd8r, resource: bindings, ignored listing per whitelist
Feb  1 12:47:03.701: INFO: namespace e2e-tests-var-expansion-psd8r deletion completed in 6.445213615s

• [SLOW TEST:18.024 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
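
A minimal pod spec showing the $(VAR) command substitution this test relies on; names, image, and the env value are illustrative, not the framework's generated dapi-container spec:

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: "test-value"
      command: ["sh", "-c", "echo $(MESSAGE)"]   # the kubelet expands $(MESSAGE) from the container's env before exec
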
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:47:03.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-f291feca-44f0-11ea-a88d-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:47:16.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lj5qg" for this suite.
Feb  1 12:47:40.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:47:40.395: INFO: namespace: e2e-tests-configmap-lj5qg, resource: bindings, ignored listing per whitelist
Feb  1 12:47:40.427: INFO: namespace e2e-tests-configmap-lj5qg deletion completed in 24.215648247s

• [SLOW TEST:36.727 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
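
The binary-data case above only logs STEP markers; a rough manual equivalent with illustrative names:

  kubectl create configmap binary-demo --from-literal=data=value --from-file=dump.bin=./dump.bin
  kubectl get configmap binary-demo -o jsonpath='{.binaryData}'
  # non-UTF-8 file content is stored base64-encoded under binaryData and is written back out
  # byte-for-byte when the configMap is mounted as a volume
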
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:47:40.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  1 12:47:40.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-5qbq5" to be "success or failure"
Feb  1 12:47:40.880: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.736935ms
Feb  1 12:47:42.900: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041907244s
Feb  1 12:47:44.922: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064141402s
Feb  1 12:47:47.208: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349851033s
Feb  1 12:47:49.255: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397148781s
Feb  1 12:47:51.268: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.410291388s
STEP: Saw pod success
Feb  1 12:47:51.268: INFO: Pod "downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:47:51.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005 container client-container: 
STEP: delete the pod
Feb  1 12:47:51.430: INFO: Waiting for pod downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:47:51.438: INFO: Pod downwardapi-volume-086d616b-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:47:51.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5qbq5" for this suite.
Feb  1 12:47:58.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:47:58.625: INFO: namespace: e2e-tests-projected-5qbq5, resource: bindings, ignored listing per whitelist
Feb  1 12:47:58.743: INFO: namespace e2e-tests-projected-5qbq5 deletion completed in 7.296380404s

• [SLOW TEST:18.316 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
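
The "podname only" case boils down to a projected volume with a single downwardAPI item; a sketch with assumed names and mount path:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/podname"]   # should print the pod's own name
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
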
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:47:58.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  1 12:47:58.876: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4v9sn,SelfLink:/api/v1/namespaces/e2e-tests-watch-4v9sn/configmaps/e2e-watch-test-watch-closed,UID:133a5ede-44f1-11ea-a994-fa163e34d433,ResourceVersion:20197273,Generation:0,CreationTimestamp:2020-02-01 12:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  1 12:47:58.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4v9sn,SelfLink:/api/v1/namespaces/e2e-tests-watch-4v9sn/configmaps/e2e-watch-test-watch-closed,UID:133a5ede-44f1-11ea-a994-fa163e34d433,ResourceVersion:20197274,Generation:0,CreationTimestamp:2020-02-01 12:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  1 12:47:58.967: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4v9sn,SelfLink:/api/v1/namespaces/e2e-tests-watch-4v9sn/configmaps/e2e-watch-test-watch-closed,UID:133a5ede-44f1-11ea-a994-fa163e34d433,ResourceVersion:20197275,Generation:0,CreationTimestamp:2020-02-01 12:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  1 12:47:58.967: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4v9sn,SelfLink:/api/v1/namespaces/e2e-tests-watch-4v9sn/configmaps/e2e-watch-test-watch-closed,UID:133a5ede-44f1-11ea-a994-fa163e34d433,ResourceVersion:20197276,Generation:0,CreationTimestamp:2020-02-01 12:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:47:58.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4v9sn" for this suite.
Feb  1 12:48:05.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:48:05.105: INFO: namespace: e2e-tests-watch-4v9sn, resource: bindings, ignored listing per whitelist
Feb  1 12:48:05.197: INFO: namespace e2e-tests-watch-4v9sn deletion completed in 6.223254914s

• [SLOW TEST:6.453 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
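
The resume step above (a second watch opened at the last ResourceVersion seen by the first) can be mimicked against the raw API; the namespace and resourceVersion below are the ones printed in this run, have since been deleted/compacted, and would differ elsewhere:

  kubectl get --raw '/api/v1/namespaces/e2e-tests-watch-4v9sn/configmaps?watch=true&resourceVersion=20197274'

The API server replays every event newer than that resourceVersion (here the MODIFIED mutation:2 and the DELETED event) before streaming live changes.
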
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:48:05.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  1 12:48:05.458: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  1 12:48:05.537: INFO: Waiting for terminating namespaces to be deleted...
Feb  1 12:48:05.542: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  1 12:48:05.558: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:48:05.558: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:48:05.558: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:48:05.558: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  1 12:48:05.558: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 12:48:05.558: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:48:05.558: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  1 12:48:05.558: INFO: 	Container weave ready: true, restart count 0
Feb  1 12:48:05.558: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 12:48:05.558: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:48:05.558: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:48:05.558: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:48:05.558: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb  1 12:48:05.674: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1749cff7-44f1-11ea-a88d-0242ac110005.15ef481d7d7ee39a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-85x2j/filler-pod-1749cff7-44f1-11ea-a88d-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1749cff7-44f1-11ea-a88d-0242ac110005.15ef481f0081ef8e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1749cff7-44f1-11ea-a88d-0242ac110005.15ef481fa076338d], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1749cff7-44f1-11ea-a88d-0242ac110005.15ef481fca4dc5c6], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef48204ba3ceaa], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:48:18.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-85x2j" for this suite.
Feb  1 12:48:26.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:48:27.099: INFO: namespace: e2e-tests-sched-pred-85x2j, resource: bindings, ignored listing per whitelist
Feb  1 12:48:27.201: INFO: namespace e2e-tests-sched-pred-85x2j deletion completed in 8.303432531s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.003 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
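
A hand-run version of the "unavailable amount of CPU" step, assuming a 1.13-era kubectl as used in this run; the node name comes from the log, while the pod name and request size are illustrative:

  kubectl describe node hunter-server-hu5at5svl7ps | grep -A 6 'Allocatable:'
  kubectl run additional-pod --image=k8s.gcr.io/pause:3.1 --restart=Never --requests='cpu=1000m'
  kubectl get events --field-selector reason=FailedScheduling
  # expected event message: 0/1 nodes are available: 1 Insufficient cpu.
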
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:48:27.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  1 12:48:27.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:28.416: INFO: stderr: ""
Feb  1 12:48:28.416: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 12:48:28.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:28.777: INFO: stderr: ""
Feb  1 12:48:28.777: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb  1 12:48:33.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:34.732: INFO: stderr: ""
Feb  1 12:48:34.733: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-m5tmm "
Feb  1 12:48:34.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:35.630: INFO: stderr: ""
Feb  1 12:48:35.630: INFO: stdout: ""
Feb  1 12:48:35.630: INFO: update-demo-nautilus-k7ms2 is created but not running
Feb  1 12:48:40.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:40.788: INFO: stderr: ""
Feb  1 12:48:40.788: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-m5tmm "
Feb  1 12:48:40.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:41.104: INFO: stderr: ""
Feb  1 12:48:41.104: INFO: stdout: "true"
Feb  1 12:48:41.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:41.226: INFO: stderr: ""
Feb  1 12:48:41.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:48:41.226: INFO: validating pod update-demo-nautilus-k7ms2
Feb  1 12:48:41.264: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:48:41.264: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:48:41.265: INFO: update-demo-nautilus-k7ms2 is verified up and running
Feb  1 12:48:41.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5tmm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:41.378: INFO: stderr: ""
Feb  1 12:48:41.379: INFO: stdout: "true"
Feb  1 12:48:41.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5tmm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:41.489: INFO: stderr: ""
Feb  1 12:48:41.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:48:41.489: INFO: validating pod update-demo-nautilus-m5tmm
Feb  1 12:48:41.509: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:48:41.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:48:41.509: INFO: update-demo-nautilus-m5tmm is verified up and running
STEP: scaling down the replication controller
Feb  1 12:48:41.513: INFO: scanned /root for discovery docs: 
Feb  1 12:48:41.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:43.426: INFO: stderr: ""
Feb  1 12:48:43.426: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 12:48:43.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:43.597: INFO: stderr: ""
Feb  1 12:48:43.597: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-m5tmm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  1 12:48:48.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:48.792: INFO: stderr: ""
Feb  1 12:48:48.792: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-m5tmm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  1 12:48:53.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:53.980: INFO: stderr: ""
Feb  1 12:48:53.980: INFO: stdout: "update-demo-nautilus-k7ms2 "
Feb  1 12:48:53.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:54.265: INFO: stderr: ""
Feb  1 12:48:54.265: INFO: stdout: "true"
Feb  1 12:48:54.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:54.420: INFO: stderr: ""
Feb  1 12:48:54.420: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:48:54.420: INFO: validating pod update-demo-nautilus-k7ms2
Feb  1 12:48:54.430: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:48:54.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:48:54.430: INFO: update-demo-nautilus-k7ms2 is verified up and running
STEP: scaling up the replication controller
Feb  1 12:48:54.434: INFO: scanned /root for discovery docs: 
Feb  1 12:48:54.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:56.637: INFO: stderr: ""
Feb  1 12:48:56.637: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  1 12:48:56.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:56.967: INFO: stderr: ""
Feb  1 12:48:56.967: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-q6f5g "
Feb  1 12:48:56.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:57.687: INFO: stderr: ""
Feb  1 12:48:57.687: INFO: stdout: "true"
Feb  1 12:48:57.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:57.951: INFO: stderr: ""
Feb  1 12:48:57.951: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:48:57.951: INFO: validating pod update-demo-nautilus-k7ms2
Feb  1 12:48:57.981: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:48:57.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:48:57.981: INFO: update-demo-nautilus-k7ms2 is verified up and running
Feb  1 12:48:57.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6f5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:48:58.173: INFO: stderr: ""
Feb  1 12:48:58.173: INFO: stdout: ""
Feb  1 12:48:58.173: INFO: update-demo-nautilus-q6f5g is created but not running
Feb  1 12:49:03.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:03.436: INFO: stderr: ""
Feb  1 12:49:03.436: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-q6f5g "
Feb  1 12:49:03.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:03.552: INFO: stderr: ""
Feb  1 12:49:03.552: INFO: stdout: "true"
Feb  1 12:49:03.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:03.670: INFO: stderr: ""
Feb  1 12:49:03.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:49:03.670: INFO: validating pod update-demo-nautilus-k7ms2
Feb  1 12:49:03.684: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:49:03.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:49:03.684: INFO: update-demo-nautilus-k7ms2 is verified up and running
Feb  1 12:49:03.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6f5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:03.827: INFO: stderr: ""
Feb  1 12:49:03.827: INFO: stdout: ""
Feb  1 12:49:03.827: INFO: update-demo-nautilus-q6f5g is created but not running
Feb  1 12:49:08.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.047: INFO: stderr: ""
Feb  1 12:49:09.047: INFO: stdout: "update-demo-nautilus-k7ms2 update-demo-nautilus-q6f5g "
Feb  1 12:49:09.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.169: INFO: stderr: ""
Feb  1 12:49:09.169: INFO: stdout: "true"
Feb  1 12:49:09.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.313: INFO: stderr: ""
Feb  1 12:49:09.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:49:09.314: INFO: validating pod update-demo-nautilus-k7ms2
Feb  1 12:49:09.329: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:49:09.329: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:49:09.329: INFO: update-demo-nautilus-k7ms2 is verified up and running
Feb  1 12:49:09.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6f5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.463: INFO: stderr: ""
Feb  1 12:49:09.463: INFO: stdout: "true"
Feb  1 12:49:09.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q6f5g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.617: INFO: stderr: ""
Feb  1 12:49:09.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  1 12:49:09.617: INFO: validating pod update-demo-nautilus-q6f5g
Feb  1 12:49:09.627: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  1 12:49:09.627: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  1 12:49:09.627: INFO: update-demo-nautilus-q6f5g is verified up and running
STEP: using delete to clean up resources
Feb  1 12:49:09.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.772: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  1 12:49:09.772: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  1 12:49:09.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-5b64k'
Feb  1 12:49:09.931: INFO: stderr: "No resources found.\n"
Feb  1 12:49:09.931: INFO: stdout: ""
Feb  1 12:49:09.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-5b64k -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  1 12:49:10.155: INFO: stderr: ""
Feb  1 12:49:10.156: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:49:10.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5b64k" for this suite.
Feb  1 12:49:34.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:49:34.251: INFO: namespace: e2e-tests-kubectl-5b64k, resource: bindings, ignored listing per whitelist
Feb  1 12:49:34.329: INFO: namespace e2e-tests-kubectl-5b64k deletion completed in 24.153347494s

• [SLOW TEST:67.128 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
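
The scale-down/scale-up loop above is driven by two commands that can be replayed directly (namespace taken from this run):

  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-5b64k
  kubectl get pods -l name=update-demo --namespace=e2e-tests-kubectl-5b64k
  kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-5b64k

Note the intermediate polls above where the old replica lingers for a few seconds (expected=1 actual=2) before the controller converges.
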
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:49:34.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-hwpn
STEP: Creating a pod to test atomic-volume-subpath
Feb  1 12:49:34.716: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hwpn" in namespace "e2e-tests-subpath-d27dm" to be "success or failure"
Feb  1 12:49:34.757: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 41.47176ms
Feb  1 12:49:36.772: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056885286s
Feb  1 12:49:38.795: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079095363s
Feb  1 12:49:40.802: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086767699s
Feb  1 12:49:42.814: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098875279s
Feb  1 12:49:44.827: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.111003704s
Feb  1 12:49:46.862: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.146256398s
Feb  1 12:49:48.888: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.172433483s
Feb  1 12:49:50.904: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 16.1885252s
Feb  1 12:49:52.920: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 18.204241535s
Feb  1 12:49:54.939: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 20.223558907s
Feb  1 12:49:56.958: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 22.241985571s
Feb  1 12:49:59.025: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 24.309257757s
Feb  1 12:50:01.035: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 26.319252467s
Feb  1 12:50:03.050: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 28.334215419s
Feb  1 12:50:05.063: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 30.347085641s
Feb  1 12:50:07.626: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Running", Reason="", readiness=false. Elapsed: 32.910781716s
Feb  1 12:50:10.066: INFO: Pod "pod-subpath-test-secret-hwpn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.350073533s
STEP: Saw pod success
Feb  1 12:50:10.066: INFO: Pod "pod-subpath-test-secret-hwpn" satisfied condition "success or failure"
Feb  1 12:50:10.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-hwpn container test-container-subpath-secret-hwpn: 
STEP: delete the pod
Feb  1 12:50:10.553: INFO: Waiting for pod pod-subpath-test-secret-hwpn to disappear
Feb  1 12:50:10.581: INFO: Pod pod-subpath-test-secret-hwpn no longer exists
STEP: Deleting pod pod-subpath-test-secret-hwpn
Feb  1 12:50:10.581: INFO: Deleting pod "pod-subpath-test-secret-hwpn" in namespace "e2e-tests-subpath-d27dm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:50:10.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-d27dm" for this suite.
Feb  1 12:50:18.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:50:18.899: INFO: namespace: e2e-tests-subpath-d27dm, resource: bindings, ignored listing per whitelist
Feb  1 12:50:18.934: INFO: namespace e2e-tests-subpath-d27dm deletion completed in 8.248915445s

• [SLOW TEST:44.605 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
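
A minimal sketch of the secret-plus-subPath arrangement this test covers; the secret name, key, and paths are assumptions, not the framework's generated spec:

  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath
      image: busybox
      command: ["sh", "-c", "cat /mnt/subpath-file"]
      volumeMounts:
      - name: creds
        mountPath: /mnt/subpath-file
        subPath: my-key              # mounts a single key of the secret rather than the whole volume
    volumes:
    - name: creds
      secret:
        secretName: subpath-demo-secret
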
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:50:18.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  1 12:50:19.099: INFO: Waiting up to 5m0s for pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-5vp44" to be "success or failure"
Feb  1 12:50:19.107: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.179367ms
Feb  1 12:50:21.293: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193860398s
Feb  1 12:50:23.303: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203700472s
Feb  1 12:50:25.536: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43617356s
Feb  1 12:50:27.566: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.466353113s
Feb  1 12:50:29.579: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.479723415s
STEP: Saw pod success
Feb  1 12:50:29.579: INFO: Pod "pod-66cf84b3-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:50:29.583: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-66cf84b3-44f1-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:50:30.462: INFO: Waiting for pod pod-66cf84b3-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:50:30.684: INFO: Pod pod-66cf84b3-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:50:30.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5vp44" for this suite.
Feb  1 12:50:36.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:50:36.865: INFO: namespace: e2e-tests-emptydir-5vp44, resource: bindings, ignored listing per whitelist
Feb  1 12:50:36.881: INFO: namespace e2e-tests-emptydir-5vp44 deletion completed in 6.184244993s

• [SLOW TEST:17.947 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
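
This case differs from the (non-root,0777,tmpfs) run further up only in the file mode written and checked. A rough manual spot-check against a pod mounting a medium: Memory emptyDir (pod name and paths are placeholders):

  kubectl exec <pod-name> -- sh -c 'grep /mnt/volume /proc/mounts; ls -l /mnt/volume/testfile'
  # expect a tmpfs mount entry and -rw-r--r-- (0644) on the written file
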
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:50:36.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  1 12:50:37.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-x7d5z" to be "success or failure"
Feb  1 12:50:37.278: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.912516ms
Feb  1 12:50:39.295: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033324868s
Feb  1 12:50:41.309: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047068157s
Feb  1 12:50:43.862: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599835412s
Feb  1 12:50:45.881: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618811049s
Feb  1 12:50:47.895: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633636597s
STEP: Saw pod success
Feb  1 12:50:47.896: INFO: Pod "downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:50:47.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005 container client-container: 
STEP: delete the pod
Feb  1 12:50:49.562: INFO: Waiting for pod downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:50:49.581: INFO: Pod downwardapi-volume-7193838d-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:50:49.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x7d5z" for this suite.
Feb  1 12:50:55.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:50:55.690: INFO: namespace: e2e-tests-downward-api-x7d5z, resource: bindings, ignored listing per whitelist
Feb  1 12:50:55.796: INFO: namespace e2e-tests-downward-api-x7d5z deletion completed in 6.201749821s

• [SLOW TEST:18.915 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
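
A sketch of a downwardAPI volume exposing the container's memory request, which is what the client-container above reads back; names and the 64Mi figure are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memory-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
            divisor: 1Mi             # the projected file contains "64" with this divisor
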
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:50:55.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7ccf59bb-44f1-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  1 12:50:56.018: INFO: Waiting up to 5m0s for pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-h87f2" to be "success or failure"
Feb  1 12:50:56.030: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.229439ms
Feb  1 12:50:58.043: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025411031s
Feb  1 12:51:00.741: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723162031s
Feb  1 12:51:02.753: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735351565s
Feb  1 12:51:04.782: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.764369375s
STEP: Saw pod success
Feb  1 12:51:04.782: INFO: Pod "pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:51:04.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  1 12:51:04.920: INFO: Waiting for pod pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:51:04.927: INFO: Pod pod-configmaps-7cd044d1-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:51:04.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h87f2" for this suite.
Feb  1 12:51:11.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:51:11.195: INFO: namespace: e2e-tests-configmap-h87f2, resource: bindings, ignored listing per whitelist
Feb  1 12:51:11.227: INFO: namespace e2e-tests-configmap-h87f2 deletion completed in 6.290615009s

• [SLOW TEST:15.430 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
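
The ConfigMap spec above mounts a ConfigMap as a volume, remapping a key to a nested path and pinning an explicit file mode on that item. For reference, a minimal sketch of that kind of pod spec written against the k8s.io/api/core/v1 Go types; the ConfigMap name, key, image, and paths are illustrative assumptions, not the fixtures the suite generates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // per-item file mode, the "Item mode set" part of the test

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
						// Remap key "data-2" to a nested path and give it an explicit mode.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &itemMode}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
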
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:51:11.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  1 12:51:11.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  1 12:51:11.636: INFO: stderr: ""
Feb  1 12:51:11.636: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:51:11.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qtxks" for this suite.
Feb  1 12:51:17.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:51:17.832: INFO: namespace: e2e-tests-kubectl-qtxks, resource: bindings, ignored listing per whitelist
Feb  1 12:51:17.970: INFO: namespace e2e-tests-kubectl-qtxks deletion completed in 6.314009305s

• [SLOW TEST:6.743 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
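
The kubectl version spec only asserts that both the client and the server version blocks are printed, as in the stdout captured above. The same server information can also be fetched programmatically through client-go's discovery client; a minimal sketch, assuming the kubeconfig path used by this run.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the e2e run above uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent to the server half of `kubectl version`.
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Server: %s (%s, %s)\n", info.GitVersion, info.Platform, info.GoVersion)
}
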
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:51:17.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:51:30.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-z2x8w" for this suite.
Feb  1 12:51:36.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:51:37.132: INFO: namespace: e2e-tests-kubelet-test-z2x8w, resource: bindings, ignored listing per whitelist
Feb  1 12:51:37.192: INFO: namespace e2e-tests-kubelet-test-z2x8w deletion completed in 6.486921609s

• [SLOW TEST:19.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
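
The Kubelet spec above schedules a busybox command that always fails and then checks that the container status carries a terminated state with a reason. A rough sketch of how such a status could be inspected with client-go; the namespace and pod name are placeholders, not the generated fixtures.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder namespace/pod; the suite generates its own names.
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			// The test's assertion is essentially: a Reason is set (e.g. "Error"),
			// the exit code is non-zero, and StartedAt/FinishedAt are populated.
			fmt.Printf("container %s terminated: reason=%s exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}
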
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:51:37.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  1 12:51:37.505: INFO: Waiting up to 5m0s for pod "pod-95872a57-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-r5mpm" to be "success or failure"
Feb  1 12:51:37.612: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.379915ms
Feb  1 12:51:40.004: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499504046s
Feb  1 12:51:42.031: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52672039s
Feb  1 12:51:44.052: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547661053s
Feb  1 12:51:46.073: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568204474s
Feb  1 12:51:48.110: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.605409643s
STEP: Saw pod success
Feb  1 12:51:48.110: INFO: Pod "pod-95872a57-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:51:48.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-95872a57-44f1-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:51:49.291: INFO: Waiting for pod pod-95872a57-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:51:49.513: INFO: Pod pod-95872a57-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:51:49.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r5mpm" for this suite.
Feb  1 12:51:55.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:51:55.641: INFO: namespace: e2e-tests-emptydir-r5mpm, resource: bindings, ignored listing per whitelist
Feb  1 12:51:55.947: INFO: namespace e2e-tests-emptydir-r5mpm deletion completed in 6.394667877s

• [SLOW TEST:18.755 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
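
The (root,0777,tmpfs) variant above writes into an emptyDir backed by memory. A minimal sketch of such a pod spec using the k8s.io/api Go types; the image and command are illustrative, the real test writes a file and checks its mode and content.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, which is what
					// the (root,0777,tmpfs) variant exercises.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
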
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:51:55.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0201 12:52:06.630982       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  1 12:52:06.631: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:52:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fxxgt" for this suite.
Feb  1 12:52:12.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:52:12.889: INFO: namespace: e2e-tests-gc-fxxgt, resource: bindings, ignored listing per whitelist
Feb  1 12:52:12.972: INFO: namespace e2e-tests-gc-fxxgt deletion completed in 6.326783856s

• [SLOW TEST:17.024 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
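
The garbage-collector spec deletes the replication controller without orphaning and then waits for its pods to be collected. With client-go, non-orphaning deletion is expressed through the deletion propagation policy, as in the sketch below (modern client-go signatures assumed; the namespace and RC name are illustrative). The later "orphan pods" spec in this run flips the policy to metav1.DeletePropagationOrphan.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Background (or Foreground) propagation lets the garbage collector remove
	// the RC's pods; Orphan leaves them behind.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		log.Fatal(err)
	}
}
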
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:52:12.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  1 12:52:13.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-downward-api-6v62k" to be "success or failure"
Feb  1 12:52:13.184: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.58428ms
Feb  1 12:52:15.454: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297292581s
Feb  1 12:52:17.481: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323866827s
Feb  1 12:52:19.519: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361475283s
Feb  1 12:52:21.546: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389275424s
Feb  1 12:52:23.584: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.427083921s
STEP: Saw pod success
Feb  1 12:52:23.584: INFO: Pod "downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:52:23.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005 container client-container: 
STEP: delete the pod
Feb  1 12:52:23.902: INFO: Waiting for pod downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:52:24.053: INFO: Pod downwardapi-volume-aaca31ed-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:52:24.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6v62k" for this suite.
Feb  1 12:52:30.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:52:30.193: INFO: namespace: e2e-tests-downward-api-6v62k, resource: bindings, ignored listing per whitelist
Feb  1 12:52:30.229: INFO: namespace e2e-tests-downward-api-6v62k deletion completed in 6.165146444s

• [SLOW TEST:17.257 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
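
The Downward API spec above mounts a volume exposing the container's cpu limit as a file. A minimal sketch of the pod shape involved; the resource values, paths, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Exposes the container's cpu limit as a file, scaled by the divisor.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
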
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:52:30.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b5139e65-44f1-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:52:30.461: INFO: Waiting up to 5m0s for pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-bmw76" to be "success or failure"
Feb  1 12:52:30.502: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.368193ms
Feb  1 12:52:32.533: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071851565s
Feb  1 12:52:34.566: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105014631s
Feb  1 12:52:36.671: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210265965s
Feb  1 12:52:38.709: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.248221825s
Feb  1 12:52:40.733: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.272530421s
STEP: Saw pod success
Feb  1 12:52:40.734: INFO: Pod "pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:52:40.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  1 12:52:41.125: INFO: Waiting for pod pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:52:41.143: INFO: Pod pod-secrets-b5143f69-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:52:41.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bmw76" for this suite.
Feb  1 12:52:47.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:52:47.377: INFO: namespace: e2e-tests-secrets-bmw76, resource: bindings, ignored listing per whitelist
Feb  1 12:52:47.421: INFO: namespace e2e-tests-secrets-bmw76 deletion completed in 6.258591489s

• [SLOW TEST:17.192 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
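
The Secrets volume spec mirrors the earlier ConfigMap one: a key is remapped to a new path and given an explicit item mode, only sourced from a Secret. A minimal sketch with illustrative names.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "my-secret",
						// Same pattern as the ConfigMap variant: remap a key and pin its mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &itemMode}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
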
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:52:47.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-bf4c220e-44f1-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  1 12:52:47.564: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-b8qmx" to be "success or failure"
Feb  1 12:52:47.619: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 54.544771ms
Feb  1 12:52:49.624: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060447202s
Feb  1 12:52:51.649: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085373383s
Feb  1 12:52:53.670: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106017951s
Feb  1 12:52:55.683: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11857811s
Feb  1 12:52:57.697: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132976112s
STEP: Saw pod success
Feb  1 12:52:57.697: INFO: Pod "pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:52:57.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  1 12:52:57.909: INFO: Waiting for pod pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:52:58.919: INFO: Pod pod-projected-configmaps-bf4ce58f-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:52:58.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b8qmx" for this suite.
Feb  1 12:53:05.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:53:05.333: INFO: namespace: e2e-tests-projected-b8qmx, resource: bindings, ignored listing per whitelist
Feb  1 12:53:05.351: INFO: namespace e2e-tests-projected-b8qmx deletion completed in 6.422293256s

• [SLOW TEST:17.930 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
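
The projected ConfigMap spec consumes the same kind of key mapping, but through a projected volume and with the container running as a non-root user. A hedged sketch; the UID, object names, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // run as a non-root user, as the test name implies

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "projected-configmap-volume-test",
				Image:           "busybox",
				Command:         []string{"cat", "/etc/projected/path/to/data-2"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
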
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:53:05.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  1 12:53:05.560: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix020885466/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:53:05.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hrcrg" for this suite.
Feb  1 12:53:11.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:53:11.804: INFO: namespace: e2e-tests-kubectl-hrcrg, resource: bindings, ignored listing per whitelist
Feb  1 12:53:11.943: INFO: namespace e2e-tests-kubectl-hrcrg deletion completed in 6.248298895s

• [SLOW TEST:6.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:53:11.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  1 12:53:12.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-projected-mgbqk" to be "success or failure"
Feb  1 12:53:12.217: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.472028ms
Feb  1 12:53:14.225: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055577492s
Feb  1 12:53:16.292: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12295092s
Feb  1 12:53:18.616: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446568829s
Feb  1 12:53:20.643: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473658857s
Feb  1 12:53:22.688: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.518657062s
STEP: Saw pod success
Feb  1 12:53:22.688: INFO: Pod "downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:53:22.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005 container client-container: 
STEP: delete the pod
Feb  1 12:53:23.723: INFO: Waiting for pod downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:53:24.047: INFO: Pod downwardapi-volume-cdeb6fa8-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:53:24.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mgbqk" for this suite.
Feb  1 12:53:30.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:53:30.240: INFO: namespace: e2e-tests-projected-mgbqk, resource: bindings, ignored listing per whitelist
Feb  1 12:53:30.258: INFO: namespace e2e-tests-projected-mgbqk deletion completed in 6.180470004s

• [SLOW TEST:18.314 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:53:30.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d8da5504-44f1-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  1 12:53:30.457: INFO: Waiting up to 5m0s for pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005" in namespace "e2e-tests-secrets-5nklv" to be "success or failure"
Feb  1 12:53:30.479: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.784774ms
Feb  1 12:53:32.999: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541488492s
Feb  1 12:53:35.013: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556186747s
Feb  1 12:53:37.024: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566420899s
Feb  1 12:53:39.037: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580138033s
Feb  1 12:53:41.624: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.167093368s
STEP: Saw pod success
Feb  1 12:53:41.624: INFO: Pod "pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:53:41.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb  1 12:53:42.104: INFO: Waiting for pod pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005 to disappear
Feb  1 12:53:42.119: INFO: Pod pod-secrets-d8ddd0bb-44f1-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:53:42.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5nklv" for this suite.
Feb  1 12:53:48.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:53:48.320: INFO: namespace: e2e-tests-secrets-5nklv, resource: bindings, ignored listing per whitelist
Feb  1 12:53:48.422: INFO: namespace e2e-tests-secrets-5nklv deletion completed in 6.28711647s

• [SLOW TEST:18.164 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
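
The env-var Secrets spec injects a secret key into the container environment rather than mounting it as a volume. A minimal sketch with illustrative names.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
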
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:53:48.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mdw59
Feb  1 12:53:58.834: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mdw59
STEP: checking the pod's current state and verifying that restartCount is present
Feb  1 12:53:58.885: INFO: Initial restart count of pod liveness-http is 0
Feb  1 12:54:23.181: INFO: Restart count of pod e2e-tests-container-probe-mdw59/liveness-http is now 1 (24.295281294s elapsed)
Feb  1 12:54:44.020: INFO: Restart count of pod e2e-tests-container-probe-mdw59/liveness-http is now 2 (45.134584245s elapsed)
Feb  1 12:55:08.208: INFO: Restart count of pod e2e-tests-container-probe-mdw59/liveness-http is now 3 (1m9.322912793s elapsed)
Feb  1 12:55:24.582: INFO: Restart count of pod e2e-tests-container-probe-mdw59/liveness-http is now 4 (1m25.696809263s elapsed)
Feb  1 12:55:41.433: INFO: Restart count of pod e2e-tests-container-probe-mdw59/liveness-http is now 5 (1m42.547358055s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:55:41.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mdw59" for this suite.
Feb  1 12:55:49.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:55:49.719: INFO: namespace: e2e-tests-container-probe-mdw59, resource: bindings, ignored listing per whitelist
Feb  1 12:55:50.019: INFO: namespace e2e-tests-container-probe-mdw59 deletion completed in 8.496244207s

• [SLOW TEST:121.597 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
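
The container-probe spec creates an HTTP liveness probe that keeps failing, so the kubelet restarts the container and the restart count climbs monotonically, as logged above. A hedged sketch of such a probe; the image and endpoint are illustrative, and the embedded probe field is named Handler rather than ProbeHandler in older core/v1 releases.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			// Default RestartPolicy (Always) is what allows the restart count to grow.
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image whose /healthz starts failing
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
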
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:55:50.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  1 12:56:00.762: INFO: Pod pod-hostip-2c59bd42-44f2-11ea-a88d-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:56:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2vdxp" for this suite.
Feb  1 12:56:24.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:56:24.866: INFO: namespace: e2e-tests-pods-2vdxp, resource: bindings, ignored listing per whitelist
Feb  1 12:56:24.938: INFO: namespace e2e-tests-pods-2vdxp deletion completed in 24.163517344s

• [SLOW TEST:34.918 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:56:24.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  1 12:56:25.247: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  1 12:56:25.267: INFO: Waiting for terminating namespaces to be deleted...
Feb  1 12:56:25.272: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  1 12:56:25.289: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:56:25.289: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:56:25.290: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:56:25.290: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  1 12:56:25.290: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  1 12:56:25.290: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:56:25.290: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  1 12:56:25.290: INFO: 	Container weave ready: true, restart count 0
Feb  1 12:56:25.290: INFO: 	Container weave-npc ready: true, restart count 0
Feb  1 12:56:25.290: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  1 12:56:25.290: INFO: 	Container coredns ready: true, restart count 0
Feb  1 12:56:25.290: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  1 12:56:25.290: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ef4891d27a618a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:56:26.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tlwvg" for this suite.
Feb  1 12:56:32.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:56:32.870: INFO: namespace: e2e-tests-sched-pred-tlwvg, resource: bindings, ignored listing per whitelist
Feb  1 12:56:32.939: INFO: namespace e2e-tests-sched-pred-tlwvg deletion completed in 6.471569286s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:8.001 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
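
The scheduler-predicates spec asks for a node label that no node carries, so the pod stays Pending and the FailedScheduling event quoted above is emitted. A minimal sketch of a pod with such a non-matching nodeSelector; the label and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// A selector no node satisfies, so the scheduler reports
			// "0/1 nodes are available: 1 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative placeholder container
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
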
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:56:32.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0201 12:57:25.266996       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  1 12:57:25.267: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:57:25.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-czgfp" for this suite.
Feb  1 12:57:38.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:57:42.133: INFO: namespace: e2e-tests-gc-czgfp, resource: bindings, ignored listing per whitelist
Feb  1 12:57:42.171: INFO: namespace e2e-tests-gc-czgfp deletion completed in 16.419753104s

• [SLOW TEST:69.232 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:57:42.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-wsfn
STEP: Creating a pod to test atomic-volume-subpath
Feb  1 12:57:47.106: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wsfn" in namespace "e2e-tests-subpath-k5xgz" to be "success or failure"
Feb  1 12:57:48.278: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 1.1724576s
Feb  1 12:57:51.631: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525313843s
Feb  1 12:57:54.278: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.172040853s
Feb  1 12:57:56.466: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.36066181s
Feb  1 12:57:58.734: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 11.62791005s
Feb  1 12:58:01.601: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.495197369s
Feb  1 12:58:04.097: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.991332362s
Feb  1 12:58:06.125: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.019182588s
Feb  1 12:58:08.138: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.032610331s
Feb  1 12:58:11.653: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.547488033s
Feb  1 12:58:13.666: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 26.560587524s
Feb  1 12:58:15.919: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 28.81288083s
Feb  1 12:58:17.931: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 30.824890608s
Feb  1 12:58:19.939: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 32.833204557s
Feb  1 12:58:22.392: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Pending", Reason="", readiness=false. Elapsed: 35.286649997s
Feb  1 12:58:24.417: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 37.311221027s
Feb  1 12:58:26.430: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 39.324006778s
Feb  1 12:58:28.450: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 41.344662686s
Feb  1 12:58:30.475: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 43.369728884s
Feb  1 12:58:32.523: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 45.417209027s
Feb  1 12:58:34.562: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 47.456158214s
Feb  1 12:58:36.596: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 49.490229902s
Feb  1 12:58:38.610: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 51.504130609s
Feb  1 12:58:40.646: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Running", Reason="", readiness=false. Elapsed: 53.540188311s
Feb  1 12:58:42.697: INFO: Pod "pod-subpath-test-projected-wsfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 55.591087163s
STEP: Saw pod success
Feb  1 12:58:42.697: INFO: Pod "pod-subpath-test-projected-wsfn" satisfied condition "success or failure"
Feb  1 12:58:42.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-wsfn container test-container-subpath-projected-wsfn: 
STEP: delete the pod
Feb  1 12:58:43.123: INFO: Waiting for pod pod-subpath-test-projected-wsfn to disappear
Feb  1 12:58:43.267: INFO: Pod pod-subpath-test-projected-wsfn no longer exists
STEP: Deleting pod pod-subpath-test-projected-wsfn
Feb  1 12:58:43.268: INFO: Deleting pod "pod-subpath-test-projected-wsfn" in namespace "e2e-tests-subpath-k5xgz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:58:43.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-k5xgz" for this suite.
Feb  1 12:58:51.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:58:51.399: INFO: namespace: e2e-tests-subpath-k5xgz, resource: bindings, ignored listing per whitelist
Feb  1 12:58:51.669: INFO: namespace e2e-tests-subpath-k5xgz deletion completed in 8.367212788s

• [SLOW TEST:69.498 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
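
The subpath spec mounts a single item of an atomically updated projected volume via subPath and verifies the container can keep reading it. A minimal sketch of the mount shape; the ConfigMap name, key, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/test-volume",
					// SubPath mounts one file out of the projected volume at the
					// mount path instead of mounting the whole volume directory.
					SubPath: "projected-file",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "projected-file"}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
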
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:58:51.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  1 12:58:51.980: INFO: Waiting up to 5m0s for pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-llnxx" to be "success or failure"
Feb  1 12:58:51.993: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.029523ms
Feb  1 12:58:54.285: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305397784s
Feb  1 12:58:56.316: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33610181s
Feb  1 12:58:58.517: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537489678s
Feb  1 12:59:00.542: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562006302s
Feb  1 12:59:02.588: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.608396004s
STEP: Saw pod success
Feb  1 12:59:02.589: INFO: Pod "pod-9881b0e0-44f2-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:59:02.614: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9881b0e0-44f2-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 12:59:02.797: INFO: Waiting for pod pod-9881b0e0-44f2-11ea-a88d-0242ac110005 to disappear
Feb  1 12:59:02.808: INFO: Pod pod-9881b0e0-44f2-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:59:02.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-llnxx" for this suite.
Feb  1 12:59:08.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:59:09.069: INFO: namespace: e2e-tests-emptydir-llnxx, resource: bindings, ignored listing per whitelist
Feb  1 12:59:09.087: INFO: namespace e2e-tests-emptydir-llnxx deletion completed in 6.268616088s

• [SLOW TEST:17.418 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:59:09.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a2d71124-44f2-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  1 12:59:09.360: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-cv77g" to be "success or failure"
Feb  1 12:59:09.419: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.827908ms
Feb  1 12:59:11.437: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076243835s
Feb  1 12:59:13.450: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089428565s
Feb  1 12:59:15.810: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449857864s
Feb  1 12:59:17.988: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.62799059s
Feb  1 12:59:20.101: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.740642799s
STEP: Saw pod success
Feb  1 12:59:20.101: INFO: Pod "pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:59:20.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  1 12:59:20.284: INFO: Waiting for pod pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005 to disappear
Feb  1 12:59:20.357: INFO: Pod pod-configmaps-a2db0023-44f2-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:59:20.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cv77g" for this suite.
Feb  1 12:59:26.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:59:26.430: INFO: namespace: e2e-tests-configmap-cv77g, resource: bindings, ignored listing per whitelist
Feb  1 12:59:26.619: INFO: namespace e2e-tests-configmap-cv77g deletion completed in 6.248276823s

• [SLOW TEST:17.532 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
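(For reference, and not part of the captured output: the spec above creates a ConfigMap and a short-lived pod that mounts it as a volume, then reads the file back. A minimal sketch of that shape using the corev1 types visible in this log's pod dumps; the object names, image, data and mount path are illustrative assumptions, not values from this run.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap holding the data the pod reads back (illustrative name/data).
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// Pod that mounts the ConfigMap as a volume and prints the mounted file.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}

	// Print both objects as JSON so the sketch can be inspected or applied by hand.
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}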
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:59:26.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  1 12:59:26.823: INFO: Waiting up to 5m0s for pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005" in namespace "e2e-tests-var-expansion-jqj7g" to be "success or failure"
Feb  1 12:59:26.868: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.900526ms
Feb  1 12:59:29.007: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184017908s
Feb  1 12:59:31.054: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231421781s
Feb  1 12:59:35.199: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.375900814s
Feb  1 12:59:37.209: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.386322681s
Feb  1 12:59:39.218: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.395203901s
STEP: Saw pod success
Feb  1 12:59:39.218: INFO: Pod "var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 12:59:39.220: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  1 12:59:39.350: INFO: Waiting for pod var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005 to disappear
Feb  1 12:59:39.370: INFO: Pod var-expansion-ad4754a2-44f2-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 12:59:39.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-jqj7g" for this suite.
Feb  1 12:59:47.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 12:59:47.747: INFO: namespace: e2e-tests-var-expansion-jqj7g, resource: bindings, ignored listing per whitelist
Feb  1 12:59:47.772: INFO: namespace e2e-tests-var-expansion-jqj7g deletion completed in 8.394301944s

• [SLOW TEST:21.153 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
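(For reference, not from the captured run: the variable-expansion spec relies on the kubelet substituting "$(VAR)" references in a container's args from that container's environment before the process starts. A minimal sketch under that assumption; container name "dapi-container" matches the log above, while the image, env value and echoed string are illustrative.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "$(TEST_VAR)" in Args is expanded by Kubernetes, not by the shell,
	// so the container simply echoes the resolved value.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(TEST_VAR)"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}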
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 12:59:47.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  1 12:59:48.023: INFO: PodSpec: initContainers in spec.initContainers
Feb  1 13:01:03.765: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b9ebcc26-44f2-11ea-a88d-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-bcfft", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-bcfft/pods/pod-init-b9ebcc26-44f2-11ea-a88d-0242ac110005", UID:"b9ecc819-44f2-11ea-a994-fa163e34d433", ResourceVersion:"20198987", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716158788, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"23415616", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lk9bb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00276c080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lk9bb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lk9bb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lk9bb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b182c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002784000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b18340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b18360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b18368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b1836c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158788, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158788, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158788, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716158788, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0022fc120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b7e070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b7e0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d3c8973f47d8f6db106b964c9da6652fbb80b5e0fe879aaec2ec931685d6e7d2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022fc5a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022fc3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:01:03.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bcfft" for this suite.
Feb  1 13:01:27.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:01:27.977: INFO: namespace: e2e-tests-init-container-bcfft, resource: bindings, ignored listing per whitelist
Feb  1 13:01:28.067: INFO: namespace e2e-tests-init-container-bcfft deletion completed in 24.274627949s

• [SLOW TEST:100.295 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
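(For reference, not from the captured run: the PodSpec dumped above is large, but its relevant shape is small. A condensed reconstruction of it; resource requests, labels and the service-account token volume are omitted, and the object name is illustrative.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init1 exits non-zero, so init2 never runs and run1 never starts; with
	// RestartPolicy "Always" the kubelet keeps restarting init1 with backoff,
	// which is exactly what this spec asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}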
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 13:01:28.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f5d416ba-44f2-11ea-a88d-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  1 13:01:28.717: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005" in namespace "e2e-tests-configmap-7bn5q" to be "success or failure"
Feb  1 13:01:28.769: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.131427ms
Feb  1 13:01:30.924: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206235493s
Feb  1 13:01:32.936: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218582575s
Feb  1 13:01:35.720: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002456612s
Feb  1 13:01:37.738: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.020309013s
Feb  1 13:01:39.754: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.036287457s
Feb  1 13:01:41.777: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.060042376s
STEP: Saw pod success
Feb  1 13:01:41.777: INFO: Pod "pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 13:01:41.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  1 13:01:42.652: INFO: Waiting for pod pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005 to disappear
Feb  1 13:01:42.933: INFO: Pod pod-configmaps-f5dbc9c1-44f2-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:01:42.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7bn5q" for this suite.
Feb  1 13:01:51.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:01:51.158: INFO: namespace: e2e-tests-configmap-7bn5q, resource: bindings, ignored listing per whitelist
Feb  1 13:01:51.165: INFO: namespace e2e-tests-configmap-7bn5q deletion completed in 8.215028732s

• [SLOW TEST:23.097 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
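(For reference, not from the captured run: relative to the earlier ConfigMap-volume sketch, the "as non-root" variant only adds a pod-level security context, and typically a default file mode for the mounted keys. A fragment sketch of just those two pieces; the UID 1000 and mode 0644 are illustrative assumptions, since the exact values are not visible in this log.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000)   // assumed non-root UID for pod.Spec.SecurityContext
	mode := int32(0644)  // assumed default mode for the mounted ConfigMap files

	sc := &corev1.PodSecurityContext{RunAsUser: &uid}
	src := &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		DefaultMode:          &mode,
	}

	for _, obj := range []interface{}{sc, src} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}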
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 13:01:51.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  1 13:01:53.001: INFO: Pod name wrapped-volume-race-045cde93-44f3-11ea-a88d-0242ac110005: Found 0 pods out of 5
Feb  1 13:01:58.048: INFO: Pod name wrapped-volume-race-045cde93-44f3-11ea-a88d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-045cde93-44f3-11ea-a88d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-sgwvr, will wait for the garbage collector to delete the pods
Feb  1 13:03:40.414: INFO: Deleting ReplicationController wrapped-volume-race-045cde93-44f3-11ea-a88d-0242ac110005 took: 29.195934ms
Feb  1 13:03:41.214: INFO: Terminating ReplicationController wrapped-volume-race-045cde93-44f3-11ea-a88d-0242ac110005 pods took: 800.546379ms
STEP: Creating RC which spawns configmap-volume pods
Feb  1 13:04:34.163: INFO: Pod name wrapped-volume-race-64537017-44f3-11ea-a88d-0242ac110005: Found 0 pods out of 5
Feb  1 13:04:39.181: INFO: Pod name wrapped-volume-race-64537017-44f3-11ea-a88d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-64537017-44f3-11ea-a88d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-sgwvr, will wait for the garbage collector to delete the pods
Feb  1 13:06:33.344: INFO: Deleting ReplicationController wrapped-volume-race-64537017-44f3-11ea-a88d-0242ac110005 took: 29.699444ms
Feb  1 13:06:33.844: INFO: Terminating ReplicationController wrapped-volume-race-64537017-44f3-11ea-a88d-0242ac110005 pods took: 500.504027ms
STEP: Creating RC which spawns configmap-volume pods
Feb  1 13:07:23.027: INFO: Pod name wrapped-volume-race-c9041d3b-44f3-11ea-a88d-0242ac110005: Found 0 pods out of 5
Feb  1 13:07:28.048: INFO: Pod name wrapped-volume-race-c9041d3b-44f3-11ea-a88d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c9041d3b-44f3-11ea-a88d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-sgwvr, will wait for the garbage collector to delete the pods
Feb  1 13:09:32.220: INFO: Deleting ReplicationController wrapped-volume-race-c9041d3b-44f3-11ea-a88d-0242ac110005 took: 43.214431ms
Feb  1 13:09:32.520: INFO: Terminating ReplicationController wrapped-volume-race-c9041d3b-44f3-11ea-a88d-0242ac110005 pods took: 300.368746ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:10:25.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-sgwvr" for this suite.
Feb  1 13:10:38.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:10:38.154: INFO: namespace: e2e-tests-emptydir-wrapper-sgwvr, resource: bindings, ignored listing per whitelist
Feb  1 13:10:38.156: INFO: namespace e2e-tests-emptydir-wrapper-sgwvr deletion completed in 12.223402339s

• [SLOW TEST:526.991 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
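(For reference, not from the captured run: the wrapper-race spec repeatedly creates a ReplicationController whose pods each mount a large number of ConfigMap volumes, checking that concurrent volume setup does not race. A sketch of that object shape; the 50/5 counts mirror the log above, while names, image and mount paths are hypothetical.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const configMaps = 50 // one volume per pre-created ConfigMap
	replicas := int32(5)

	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < configMaps; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical naming scheme
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-example"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "wrapped-volume-race-example"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "wrapped-volume-race-example"}},
				Spec: corev1.PodSpec{
					Volumes: volumes,
					Containers: []corev1.Container{{
						Name:         "test-container",
						Image:        "k8s.gcr.io/pause:3.1",
						VolumeMounts: mounts,
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}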
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 13:10:38.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  1 13:10:38.341: INFO: Waiting up to 5m0s for pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005" in namespace "e2e-tests-containers-sbbkk" to be "success or failure"
Feb  1 13:10:38.355: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.129245ms
Feb  1 13:10:40.994: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.653249502s
Feb  1 13:10:44.492: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150814964s
Feb  1 13:10:46.512: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171127714s
Feb  1 13:10:48.535: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19430106s
Feb  1 13:10:50.564: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223001337s
Feb  1 13:10:52.592: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.250962237s
Feb  1 13:10:54.617: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.276390006s
STEP: Saw pod success
Feb  1 13:10:54.617: INFO: Pod "client-containers-3d881570-44f4-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 13:10:54.625: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3d881570-44f4-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 13:10:54.742: INFO: Waiting for pod client-containers-3d881570-44f4-11ea-a88d-0242ac110005 to disappear
Feb  1 13:10:54.751: INFO: Pod client-containers-3d881570-44f4-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:10:54.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-sbbkk" for this suite.
Feb  1 13:11:00.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:11:01.000: INFO: namespace: e2e-tests-containers-sbbkk, resource: bindings, ignored listing per whitelist
Feb  1 13:11:01.052: INFO: namespace e2e-tests-containers-sbbkk deletion completed in 6.290900572s

• [SLOW TEST:22.896 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
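(For reference, not from the captured run: "override all" means the pod spec replaces both the image's entrypoint, via Command, and its arguments, via Args. A minimal sketch of that pattern; the image and the echoed strings are illustrative, as the conformance test uses its own test image and verifies the emitted output.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both Command and Args are set, so whatever ENTRYPOINT/CMD the image
	// declares is ignored entirely.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}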
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 13:11:01.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  1 13:11:10.339: INFO: 10 pods remaining
Feb  1 13:11:10.339: INFO: 10 pods has nil DeletionTimestamp
Feb  1 13:11:10.339: INFO: 
Feb  1 13:11:11.741: INFO: 10 pods remaining
Feb  1 13:11:11.741: INFO: 6 pods has nil DeletionTimestamp
Feb  1 13:11:11.741: INFO: 
Feb  1 13:11:12.539: INFO: 0 pods remaining
Feb  1 13:11:12.539: INFO: 0 pods has nil DeletionTimestamp
Feb  1 13:11:12.539: INFO: 
STEP: Gathering metrics
W0201 13:11:13.041693       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  1 13:11:13.041: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:11:13.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-284gc" for this suite.
Feb  1 13:11:33.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:11:33.302: INFO: namespace: e2e-tests-gc-284gc, resource: bindings, ignored listing per whitelist
Feb  1 13:11:33.339: INFO: namespace e2e-tests-gc-284gc deletion completed in 20.294078662s

• [SLOW TEST:32.287 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
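(For reference, not from the captured run: "keep the rc around until all its pods are deleted if the deleteOptions says so" is foreground cascading deletion; the client deletes the RC with propagationPolicy=Foreground, so the RC carries a foregroundDeletion finalizer and only disappears after the garbage collector has removed its pods, matching the shrinking pod counts logged above. A sketch of just the options object, since the exact client-go Delete signature varies by release.)

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation: the owner is deleted only after its dependents are gone.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out)) // prints {"propagationPolicy":"Foreground"}
}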
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  1 13:11:33.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  1 13:11:33.580: INFO: Waiting up to 5m0s for pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005" in namespace "e2e-tests-emptydir-72j8m" to be "success or failure"
Feb  1 13:11:33.608: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.055801ms
Feb  1 13:11:36.047: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467074336s
Feb  1 13:11:38.094: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.513367169s
Feb  1 13:11:41.230: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.6497484s
Feb  1 13:11:43.250: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.670063457s
Feb  1 13:11:45.284: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.703594282s
Feb  1 13:11:47.299: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.719079391s
STEP: Saw pod success
Feb  1 13:11:47.299: INFO: Pod "pod-5e73c6f9-44f4-11ea-a88d-0242ac110005" satisfied condition "success or failure"
Feb  1 13:11:47.304: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5e73c6f9-44f4-11ea-a88d-0242ac110005 container test-container: 
STEP: delete the pod
Feb  1 13:11:47.671: INFO: Waiting for pod pod-5e73c6f9-44f4-11ea-a88d-0242ac110005 to disappear
Feb  1 13:11:47.718: INFO: Pod pod-5e73c6f9-44f4-11ea-a88d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  1 13:11:47.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-72j8m" for this suite.
Feb  1 13:11:53.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  1 13:11:53.954: INFO: namespace: e2e-tests-emptydir-72j8m, resource: bindings, ignored listing per whitelist
Feb  1 13:11:54.182: INFO: namespace e2e-tests-emptydir-72j8m deletion completed in 6.453470028s

• [SLOW TEST:20.843 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
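(For reference, not from the captured run: the (root,0644,tmpfs) case mounts an emptyDir with medium "Memory", i.e. tmpfs, and checks a file created with mode 0644. A sketch with an ordinary busybox container standing in for the framework's mount-test image; names, image and the shell command are illustrative.)

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs; the container creates a
	// 0644 file on it and shows the resulting mode and mount type.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}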
SSS
Feb  1 13:11:54.183: INFO: Running AfterSuite actions on all nodes
Feb  1 13:11:54.183: INFO: Running AfterSuite actions on node 1
Feb  1 13:11:54.183: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8678.858 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS