I0207 10:47:14.720707 9 e2e.go:224] Starting e2e run "3324cb40-4997-11ea-abae-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581072433 - Will randomize all specs
Will run 201 of 2164 specs

Feb 7 10:47:15.260: INFO: >>> kubeConfig: /root/.kube/config
Feb 7 10:47:15.265: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 7 10:47:15.296: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 7 10:47:15.362: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 7 10:47:15.363: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 7 10:47:15.363: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 7 10:47:15.375: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 7 10:47:15.375: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 7 10:47:15.375: INFO: e2e test version: v1.13.12
Feb 7 10:47:15.376: INFO: kube-apiserver version: v1.13.8
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:47:15.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 7 10:47:15.548: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 7 10:47:15.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kbw2q'
Feb 7 10:47:17.150: INFO: stderr: ""
Feb 7 10:47:17.150: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 7 10:47:17.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kbw2q'
Feb 7 10:47:23.480: INFO: stderr: ""
Feb 7 10:47:23.480: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:47:23.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kbw2q" for this suite.
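The kubectl invocations recorded above can be replayed by hand against any reachable cluster as a sketch of what this conformance check does; the pod, image, and namespace names are taken directly from the log, and the deprecated `--generator` flag from the v1.13-era binary is omitted since current kubectl versions no longer accept it.

```shell
# Manual reproduction of the "run pod with restart=Never" check.
# Assumes kubectl on PATH and a reachable cluster; names match the log above.
kubectl create namespace e2e-tests-kubectl-kbw2q

# Create a standalone pod (no managing controller) from the nginx image.
kubectl run e2e-test-nginx-pod \
  --restart=Never \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-kbw2q

# Verify the pod object exists, then clean up as the test's AfterEach does.
kubectl get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kbw2q
kubectl delete pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kbw2q
kubectl delete namespace e2e-tests-kubectl-kbw2q
```

`--restart=Never` is what makes kubectl emit a bare Pod rather than a Deployment, which is exactly the property the spec asserts.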
Feb 7 10:47:29.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:47:29.804: INFO: namespace: e2e-tests-kubectl-kbw2q, resource: bindings, ignored listing per whitelist
Feb 7 10:47:29.825: INFO: namespace e2e-tests-kubectl-kbw2q deletion completed in 6.320910009s

• [SLOW TEST:14.448 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:47:29.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-md4sh/secret-test-3cf67f6e-4997-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 7 10:47:30.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-md4sh" to be "success or failure"
Feb 7 10:47:30.116: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.876079ms
Feb 7 10:47:32.138: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044397135s
Feb 7 10:47:34.152: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057988811s
Feb 7 10:47:36.202: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108666163s
Feb 7 10:47:38.220: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126239915s
Feb 7 10:47:40.235: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140883641s
STEP: Saw pod success
Feb 7 10:47:40.235: INFO: Pod "pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb 7 10:47:40.241: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005 container env-test:
STEP: delete the pod
Feb 7 10:47:40.400: INFO: Waiting for pod pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005 to disappear
Feb 7 10:47:40.412: INFO: Pod pod-configmaps-3d035e44-4997-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:47:40.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-md4sh" for this suite.
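The "secret consumable via the environment" scenario above can be sketched manually as follows. This is an illustrative reproduction, not the test's exact manifest: the names `secret-env-demo` and `env-test-pod` and the key `data-1` are made up for the example.

```shell
# Sketch of consuming a Secret as an environment variable.
# Assumes a reachable cluster; resource names here are illustrative only.
kubectl create secret generic secret-env-demo --from-literal=data-1=value-1

# Run a pod whose container imports the secret key as an env var and prints it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF

# After the pod reaches Succeeded, its log should contain SECRET_DATA=value-1.
kubectl logs env-test-pod
```

The e2e test asserts exactly this: a one-shot pod reaching phase Succeeded with the secret value visible in its environment.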
Feb 7 10:47:47.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:47:48.018: INFO: namespace: e2e-tests-secrets-md4sh, resource: bindings, ignored listing per whitelist
Feb 7 10:47:48.203: INFO: namespace e2e-tests-secrets-md4sh deletion completed in 7.776364317s

• [SLOW TEST:18.377 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:47:48.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 7 10:48:00.684: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-47f2627e-4997-11ea-abae-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-hhjvp",
SelfLink:"/api/v1/namespaces/e2e-tests-pods-hhjvp/pods/pod-submit-remove-47f2627e-4997-11ea-abae-0242ac110005", UID:"47f3ae32-4997-11ea-a994-fa163e34d433", ResourceVersion:"20850608", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716669268, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"417810828"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lpwcz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0019fc5c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lpwcz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019c1b18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000e10d80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019c1b50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019c1b70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019c1b78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019c1b7c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716669268, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716669279, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716669279, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716669268, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0018c0620), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0018c0640), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"docker://d24c0698142a6aad8d93fec90e519151d793bfc3d97f73c18b2ad723fb0ac0b4"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:48:12.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hhjvp" for this suite.
Feb 7 10:48:18.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:48:18.922: INFO: namespace: e2e-tests-pods-hhjvp, resource: bindings, ignored listing per whitelist
Feb 7 10:48:18.989: INFO: namespace e2e-tests-pods-hhjvp deletion completed in 6.326895561s

• [SLOW TEST:30.786 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:48:18.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 7 10:48:29.482: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:48:57.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-c64kh" for this suite.
Feb 7 10:49:03.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:49:04.138: INFO: namespace: e2e-tests-namespaces-c64kh, resource: bindings, ignored listing per whitelist
Feb 7 10:49:04.138: INFO: namespace e2e-tests-namespaces-c64kh deletion completed in 6.386976034s
STEP: Destroying namespace "e2e-tests-nsdeletetest-5h584" for this suite.
Feb 7 10:49:04.142: INFO: Namespace e2e-tests-nsdeletetest-5h584 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-zchfp" for this suite.
Feb 7 10:49:10.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:49:10.275: INFO: namespace: e2e-tests-nsdeletetest-zchfp, resource: bindings, ignored listing per whitelist
Feb 7 10:49:10.360: INFO: namespace e2e-tests-nsdeletetest-zchfp deletion completed in 6.217743464s

• [SLOW TEST:51.370 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:49:10.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0207 10:49:52.720082 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 7 10:49:52.720: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:49:52.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2kk28" for this suite.
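The orphan-on-delete behavior this garbage collector test exercises can be sketched manually as follows. The replication controller name `demo-rc` and the label `app: demo` are made up for the example; note the cascade flag changed spelling over kubectl versions (boolean `--cascade=false` on the v1.13-era client in this log, `--cascade=orphan` on current clients).

```shell
# Sketch of "delete the rc but orphan its pods"; resource names are illustrative.
# Assumes a reachable cluster.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 2
  selector:
    app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Delete only the rc; its pods are orphaned rather than garbage-collected
# (use --cascade=false on older kubectl clients).
kubectl delete rc demo-rc --cascade=orphan

# The pods should still be Running afterwards, now without an owner reference.
kubectl get pods -l app=demo
```

The test's 30-second wait above is checking exactly this: that the garbage collector does not mistakenly delete the orphaned pods.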
Feb 7 10:50:00.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:50:00.814: INFO: namespace: e2e-tests-gc-2kk28, resource: bindings, ignored listing per whitelist
Feb 7 10:50:00.918: INFO: namespace e2e-tests-gc-2kk28 deletion completed in 8.194778046s

• [SLOW TEST:50.559 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:50:00.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 7 10:50:26.547: INFO: Successfully updated pod "pod-update-activedeadlineseconds-98aaac7e-4997-11ea-abae-0242ac110005"
Feb 7 10:50:26.547: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-98aaac7e-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-pods-jpfxw" to be "terminated due to deadline exceeded"
Feb 7 10:50:26.623: INFO: Pod "pod-update-activedeadlineseconds-98aaac7e-4997-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 75.503621ms
Feb 7 10:50:28.641: INFO: Pod "pod-update-activedeadlineseconds-98aaac7e-4997-11ea-abae-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.093203624s
Feb 7 10:50:28.641: INFO: Pod "pod-update-activedeadlineseconds-98aaac7e-4997-11ea-abae-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:50:28.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jpfxw" for this suite.
Feb 7 10:50:36.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:50:36.817: INFO: namespace: e2e-tests-pods-jpfxw, resource: bindings, ignored listing per whitelist
Feb 7 10:50:36.877: INFO: namespace e2e-tests-pods-jpfxw deletion completed in 8.219754436s

• [SLOW TEST:35.958 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:50:36.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 7 10:50:37.136: INFO: Waiting up to 5m0s for pod "pod-ac80658f-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-ffxrt" to be "success or failure"
Feb 7 10:50:37.162: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.177822ms
Feb 7 10:50:39.175: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039721311s
Feb 7 10:50:41.190: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053984445s
Feb 7 10:50:43.213: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077078271s
Feb 7 10:50:45.369: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23320261s
Feb 7 10:50:47.440: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.30434142s
Feb 7 10:50:49.467: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.33130196s
Feb 7 10:50:51.477: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 14.34095428s
STEP: Saw pod success
Feb 7 10:50:51.477: INFO: Pod "pod-ac80658f-4997-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb 7 10:50:51.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac80658f-4997-11ea-abae-0242ac110005 container test-container:
STEP: delete the pod
Feb 7 10:50:51.980: INFO: Waiting for pod pod-ac80658f-4997-11ea-abae-0242ac110005 to disappear
Feb 7 10:50:52.188: INFO: Pod pod-ac80658f-4997-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:50:52.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ffxrt" for this suite.
Feb 7 10:51:00.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:51:00.636: INFO: namespace: e2e-tests-emptydir-ffxrt, resource: bindings, ignored listing per whitelist
Feb 7 10:51:00.721: INFO: namespace e2e-tests-emptydir-ffxrt deletion completed in 8.515173755s

• [SLOW TEST:23.843 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:51:00.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 7 10:51:01.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-r2wqx" to be "success or failure"
Feb 7 10:51:01.360: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 162.759943ms
Feb 7 10:51:03.632: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435174782s
Feb 7 10:51:05.647: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45025575s
Feb 7 10:51:08.106: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90914822s
Feb 7 10:51:10.130: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.933004161s
Feb 7 10:51:12.169: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.971755419s
Feb 7 10:51:14.212: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 13.01485143s
STEP: Saw pod success
Feb 7 10:51:14.212: INFO: Pod "downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb 7 10:51:14.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005 container client-container:
STEP: delete the pod
Feb 7 10:51:14.537: INFO: Waiting for pod downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005 to disappear
Feb 7 10:51:14.562: INFO: Pod downwardapi-volume-bad72308-4997-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 7 10:51:14.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r2wqx" for this suite.
Feb 7 10:51:20.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 7 10:51:20.885: INFO: namespace: e2e-tests-projected-r2wqx, resource: bindings, ignored listing per whitelist
Feb 7 10:51:20.895: INFO: namespace e2e-tests-projected-r2wqx deletion completed in 6.310069698s

• [SLOW TEST:20.172 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 7 10:51:20.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-xz6d
STEP: Creating a pod to test atomic-volume-subpath
Feb 7 10:51:21.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xz6d" in namespace "e2e-tests-subpath-sc7jr" to be "success or failure"
Feb 7 10:51:21.264: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 149.239926ms
Feb 7 10:51:23.325: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209905565s
Feb 7 10:51:25.347: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231589583s
Feb 7 10:51:28.206: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.090557229s
Feb 7 10:51:30.218: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.103078702s
Feb 7 10:51:32.232: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.117160345s
Feb 7 10:51:34.325: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.209649481s
Feb 7 10:51:36.344: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.228794648s
Feb 7 10:51:38.359: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Pending", Reason="", readiness=false.
Elapsed: 17.244375131s Feb 7 10:51:40.382: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 19.266715279s Feb 7 10:51:42.406: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 21.291000941s Feb 7 10:51:44.429: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 23.313564888s Feb 7 10:51:46.458: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 25.343183263s Feb 7 10:51:48.512: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 27.396626606s Feb 7 10:51:50.586: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 29.470639586s Feb 7 10:51:52.648: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 31.533097427s Feb 7 10:51:54.684: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 33.568583233s Feb 7 10:51:56.701: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Running", Reason="", readiness=false. Elapsed: 35.5857955s Feb 7 10:51:58.720: INFO: Pod "pod-subpath-test-configmap-xz6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.605097314s STEP: Saw pod success Feb 7 10:51:58.720: INFO: Pod "pod-subpath-test-configmap-xz6d" satisfied condition "success or failure" Feb 7 10:51:58.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-xz6d container test-container-subpath-configmap-xz6d: STEP: delete the pod Feb 7 10:51:59.523: INFO: Waiting for pod pod-subpath-test-configmap-xz6d to disappear Feb 7 10:51:59.824: INFO: Pod pod-subpath-test-configmap-xz6d no longer exists STEP: Deleting pod pod-subpath-test-configmap-xz6d Feb 7 10:51:59.824: INFO: Deleting pod "pod-subpath-test-configmap-xz6d" in namespace "e2e-tests-subpath-sc7jr" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:51:59.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-sc7jr" for this suite. Feb 7 10:52:05.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:52:05.917: INFO: namespace: e2e-tests-subpath-sc7jr, resource: bindings, ignored listing per whitelist Feb 7 10:52:06.020: INFO: namespace e2e-tests-subpath-sc7jr deletion completed in 6.172924527s • [SLOW TEST:45.124 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 7 10:52:06.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 7 10:52:06.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-br9b4" to be "success or failure" Feb 7 10:52:06.352: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 115.079003ms Feb 7 10:52:08.379: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14237638s Feb 7 10:52:10.396: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158924255s Feb 7 10:52:12.633: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396279173s Feb 7 10:52:15.152: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.915289477s Feb 7 10:52:17.177: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.939963674s STEP: Saw pod success Feb 7 10:52:17.177: INFO: Pod "downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005" satisfied condition "success or failure" Feb 7 10:52:17.193: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005 container client-container: STEP: delete the pod Feb 7 10:52:17.294: INFO: Waiting for pod downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005 to disappear Feb 7 10:52:17.301: INFO: Pod downwardapi-volume-e1997f93-4997-11ea-abae-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:52:17.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-br9b4" for this suite. Feb 7 10:52:23.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:52:23.510: INFO: namespace: e2e-tests-projected-br9b4, resource: bindings, ignored listing per whitelist Feb 7 10:52:23.523: INFO: namespace e2e-tests-projected-br9b4 deletion completed in 6.212608367s • [SLOW TEST:17.504 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 7 
10:52:23.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 7 10:52:23.698: INFO: Waiting up to 5m0s for pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-var-expansion-gqxfj" to be "success or failure" Feb 7 10:52:23.752: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 54.419247ms Feb 7 10:52:26.137: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439567761s Feb 7 10:52:28.147: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449487881s Feb 7 10:52:30.601: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.902896571s Feb 7 10:52:32.648: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.949996939s Feb 7 10:52:34.667: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.968929354s STEP: Saw pod success Feb 7 10:52:34.667: INFO: Pod "var-expansion-ec048a0d-4997-11ea-abae-0242ac110005" satisfied condition "success or failure" Feb 7 10:52:34.670: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ec048a0d-4997-11ea-abae-0242ac110005 container dapi-container: STEP: delete the pod Feb 7 10:52:35.030: INFO: Waiting for pod var-expansion-ec048a0d-4997-11ea-abae-0242ac110005 to disappear Feb 7 10:52:35.048: INFO: Pod var-expansion-ec048a0d-4997-11ea-abae-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:52:35.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-gqxfj" for this suite. Feb 7 10:52:41.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:52:41.102: INFO: namespace: e2e-tests-var-expansion-gqxfj, resource: bindings, ignored listing per whitelist Feb 7 10:52:41.184: INFO: namespace e2e-tests-var-expansion-gqxfj deletion completed in 6.127005984s • [SLOW TEST:17.660 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 7 
10:52:41.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 7 10:52:41.559: INFO: Waiting up to 5m0s for pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005" in namespace "e2e-tests-containers-dlnk6" to be "success or failure" Feb 7 10:52:41.598: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.340256ms Feb 7 10:52:43.632: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072628237s Feb 7 10:52:45.645: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085748661s Feb 7 10:52:47.662: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102721949s Feb 7 10:52:49.678: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11911421s Feb 7 10:52:51.715: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.155419217s STEP: Saw pod success Feb 7 10:52:51.715: INFO: Pod "client-containers-f6a2be7b-4997-11ea-abae-0242ac110005" satisfied condition "success or failure" Feb 7 10:52:51.722: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f6a2be7b-4997-11ea-abae-0242ac110005 container test-container: STEP: delete the pod Feb 7 10:52:51.941: INFO: Waiting for pod client-containers-f6a2be7b-4997-11ea-abae-0242ac110005 to disappear Feb 7 10:52:51.962: INFO: Pod client-containers-f6a2be7b-4997-11ea-abae-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:52:51.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-dlnk6" for this suite. Feb 7 10:52:58.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:52:58.113: INFO: namespace: e2e-tests-containers-dlnk6, resource: bindings, ignored listing per whitelist Feb 7 10:52:58.185: INFO: namespace e2e-tests-containers-dlnk6 deletion completed in 6.215542032s • [SLOW TEST:17.000 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Feb 7 10:52:58.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:53:08.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-n74kn" for this suite. Feb 7 10:53:54.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:53:55.013: INFO: namespace: e2e-tests-kubelet-test-n74kn, resource: bindings, ignored listing per whitelist Feb 7 10:53:55.040: INFO: namespace e2e-tests-kubelet-test-n74kn deletion completed in 46.224400446s • [SLOW TEST:56.855 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Feb 7 10:53:55.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2294511d-4998-11ea-abae-0242ac110005 STEP: Creating a pod to test consume secrets Feb 7 10:53:55.250: INFO: Waiting up to 5m0s for pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-zwfg7" to be "success or failure" Feb 7 10:53:55.356: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.15416ms Feb 7 10:53:57.371: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120846975s Feb 7 10:53:59.391: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140821672s Feb 7 10:54:01.405: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154027962s Feb 7 10:54:03.416: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165285929s Feb 7 10:54:05.446: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.195593654s STEP: Saw pod success Feb 7 10:54:05.446: INFO: Pod "pod-secrets-2295f30d-4998-11ea-abae-0242ac110005" satisfied condition "success or failure" Feb 7 10:54:05.452: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2295f30d-4998-11ea-abae-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 7 10:54:06.057: INFO: Waiting for pod pod-secrets-2295f30d-4998-11ea-abae-0242ac110005 to disappear Feb 7 10:54:06.311: INFO: Pod pod-secrets-2295f30d-4998-11ea-abae-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:54:06.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zwfg7" for this suite. Feb 7 10:54:12.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:54:12.612: INFO: namespace: e2e-tests-secrets-zwfg7, resource: bindings, ignored listing per whitelist Feb 7 10:54:12.801: INFO: namespace e2e-tests-secrets-zwfg7 deletion completed in 6.473323402s • [SLOW TEST:17.760 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 7 
10:54:12.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 7 10:54:13.057: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 7 10:54:13.073: INFO: Waiting for terminating namespaces to be deleted... Feb 7 10:54:13.150: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 7 10:54:13.168: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 7 10:54:13.168: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 7 10:54:13.168: INFO: Container coredns ready: true, restart count 0 Feb 7 10:54:13.168: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 7 10:54:13.168: INFO: Container kube-proxy ready: true, restart count 0 Feb 7 10:54:13.168: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 7 10:54:13.168: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 7 10:54:13.168: INFO: Container weave ready: true, restart count 0 Feb 7 10:54:13.168: INFO: Container weave-npc ready: true, restart count 0 Feb 7 10:54:13.168: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 7 10:54:13.168: INFO: Container coredns ready: true, restart count 0 Feb 7 10:54:13.168: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 7 10:54:13.168: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system 
started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 7 10:54:13.254: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-2d53d899-4998-11ea-abae-0242ac110005.15f1196220d741c2], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-xzhz5/filler-pod-2d53d899-4998-11ea-abae-0242ac110005 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d53d899-4998-11ea-abae-0242ac110005.15f1196322619edb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d53d899-4998-11ea-abae-0242ac110005.15f11963b0d15666], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d53d899-4998-11ea-abae-0242ac110005.15f11963f64afe41], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f11964760af818], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 7 10:54:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xzhz5" for this suite. 
Feb 7 10:54:33.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 7 10:54:33.430: INFO: namespace: e2e-tests-sched-pred-xzhz5, resource: bindings, ignored listing per whitelist Feb 7 10:54:33.720: INFO: namespace e2e-tests-sched-pred-xzhz5 deletion completed in 8.924678922s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.919 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 7 10:54:33.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 7 10:54:35.064: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.318505ms)
Feb  7 10:54:35.116: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.505589ms)
Feb  7 10:54:35.149: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.191214ms)
Feb  7 10:54:35.168: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.982295ms)
Feb  7 10:54:35.185: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.740798ms)
Feb  7 10:54:35.205: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.751675ms)
Feb  7 10:54:35.289: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 83.834438ms)
Feb  7 10:54:35.301: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.424398ms)
Feb  7 10:54:35.311: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.270825ms)
Feb  7 10:54:35.319: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.092889ms)
Feb  7 10:54:35.323: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.547954ms)
Feb  7 10:54:35.328: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.797188ms)
Feb  7 10:54:35.332: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.782506ms)
Feb  7 10:54:35.344: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.115683ms)
Feb  7 10:54:35.356: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.502924ms)
Feb  7 10:54:35.371: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.19937ms)
Feb  7 10:54:35.381: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.277367ms)
Feb  7 10:54:35.388: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.600096ms)
Feb  7 10:54:35.397: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.957719ms)
Feb  7 10:54:35.404: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.22142ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 10:54:35.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mfbwd" for this suite.
Feb  7 10:54:41.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 10:54:41.612: INFO: namespace: e2e-tests-proxy-mfbwd, resource: bindings, ignored listing per whitelist
Feb  7 10:54:41.652: INFO: namespace e2e-tests-proxy-mfbwd deletion completed in 6.237930486s

• [SLOW TEST:7.931 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
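Editor's note: the proxy test above repeatedly fetches the kubelet's log directory through the apiserver's node proxy subresource. A minimal sketch of the same request, issued by hand, is below; it assumes a reachable cluster and reuses the node name from this run (substitute your own node name).

```shell
# Fetch the kubelet's /logs/ listing through the apiserver's node proxy
# subresource -- the same URL path the (0)..(19) log lines above record.
# Requires cluster credentials in the current kubeconfig.
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"
```

`kubectl get --raw` sends an authenticated GET straight to the given API path, so the response is the raw directory listing (the `alternatives.log` entries seen truncated in the log above).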
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 10:54:41.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-3e63e659-4998-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-3e63e659-4998-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 10:56:06.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-52n8j" for this suite.
Feb  7 10:56:30.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 10:56:30.830: INFO: namespace: e2e-tests-projected-52n8j, resource: bindings, ignored listing per whitelist
Feb  7 10:56:30.998: INFO: namespace e2e-tests-projected-52n8j deletion completed in 24.255219882s

• [SLOW TEST:109.345 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
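Editor's note: the spec above mounts a configMap through a projected volume, updates the configMap, and waits for the kubelet to refresh the mounted file. A minimal sketch of such a pod follows; the names are illustrative (the suite generates unique suffixed names like `projected-configmap-test-upd-3e63e659-...` at runtime), and the exact container image/command are assumptions, not taken from this log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps        # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                      # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # updated mid-test
```

ConfigMap-backed volumes are eventually consistent: the kubelet's sync loop propagates the new data, which is why the spec logs "waiting to observe update in volume" rather than asserting immediately.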
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 10:56:30.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 10:56:43.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-btbhz" for this suite.
Feb  7 10:56:49.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 10:56:49.405: INFO: namespace: e2e-tests-kubelet-test-btbhz, resource: bindings, ignored listing per whitelist
Feb  7 10:56:49.538: INFO: namespace e2e-tests-kubelet-test-btbhz deletion completed in 6.219084666s

• [SLOW TEST:18.540 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
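Editor's note: the Kubelet spec above schedules a busybox pod whose command always fails and then checks the container's terminated state. A hedged sketch of such a pod (illustrative names; the suite generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox             # assumed image
    command: ["/bin/false"]    # exits non-zero, so the container terminates
```

With a command like this, `status.containerStatuses[0].state.terminated` is populated once the container exits, carrying the terminated reason (typically `Error`) that the spec asserts on.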
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 10:56:49.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-8a8d521b-4998-11ea-abae-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-8a8d5293-4998-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8a8d521b-4998-11ea-abae-0242ac110005
STEP: Updating configmap cm-test-opt-upd-8a8d5293-4998-11ea-abae-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-8a8d52ce-4998-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 10:58:08.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-msf6j" for this suite.
Feb  7 10:58:32.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 10:58:32.270: INFO: namespace: e2e-tests-configmap-msf6j, resource: bindings, ignored listing per whitelist
Feb  7 10:58:32.400: INFO: namespace e2e-tests-configmap-msf6j deletion completed in 24.202470647s

• [SLOW TEST:102.862 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
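Editor's note: the "optional updates" behavior above comes from the `optional` flag on a configMap volume source — the spec deletes one configMap, updates a second, and creates a third, all while the pod keeps running. A hedged sketch of one such volume (names shortened from the generated `cm-test-opt-del-...` style; image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # illustrative name
spec:
  containers:
  - name: delcm-volume-test
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: delcm-volume
      mountPath: /etc/configmap-volumes/delete
  volumes:
  - name: delcm-volume
    configMap:
      name: cm-test-opt-del         # deleted mid-test
      optional: true                # pod tolerates the configMap being absent
```

Because the volume is optional, deleting the backing configMap does not fail the pod; the kubelet simply reconciles the mounted directory, which the spec then observes.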
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 10:58:32.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  7 10:58:57.067: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:57.067: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:57.164817       9 log.go:172] (0xc000b05080) (0xc0008826e0) Create stream
I0207 10:58:57.164877       9 log.go:172] (0xc000b05080) (0xc0008826e0) Stream added, broadcasting: 1
I0207 10:58:57.169731       9 log.go:172] (0xc000b05080) Reply frame received for 1
I0207 10:58:57.169769       9 log.go:172] (0xc000b05080) (0xc000eea460) Create stream
I0207 10:58:57.169781       9 log.go:172] (0xc000b05080) (0xc000eea460) Stream added, broadcasting: 3
I0207 10:58:57.171037       9 log.go:172] (0xc000b05080) Reply frame received for 3
I0207 10:58:57.171062       9 log.go:172] (0xc000b05080) (0xc001dba1e0) Create stream
I0207 10:58:57.171075       9 log.go:172] (0xc000b05080) (0xc001dba1e0) Stream added, broadcasting: 5
I0207 10:58:57.172913       9 log.go:172] (0xc000b05080) Reply frame received for 5
I0207 10:58:57.357365       9 log.go:172] (0xc000b05080) Data frame received for 3
I0207 10:58:57.357503       9 log.go:172] (0xc000eea460) (3) Data frame handling
I0207 10:58:57.357562       9 log.go:172] (0xc000eea460) (3) Data frame sent
I0207 10:58:57.531620       9 log.go:172] (0xc000b05080) (0xc001dba1e0) Stream removed, broadcasting: 5
I0207 10:58:57.531885       9 log.go:172] (0xc000b05080) Data frame received for 1
I0207 10:58:57.531964       9 log.go:172] (0xc000b05080) (0xc000eea460) Stream removed, broadcasting: 3
I0207 10:58:57.532031       9 log.go:172] (0xc0008826e0) (1) Data frame handling
I0207 10:58:57.532073       9 log.go:172] (0xc0008826e0) (1) Data frame sent
I0207 10:58:57.532112       9 log.go:172] (0xc000b05080) (0xc0008826e0) Stream removed, broadcasting: 1
I0207 10:58:57.532140       9 log.go:172] (0xc000b05080) Go away received
I0207 10:58:57.532567       9 log.go:172] (0xc000b05080) (0xc0008826e0) Stream removed, broadcasting: 1
I0207 10:58:57.532607       9 log.go:172] (0xc000b05080) (0xc000eea460) Stream removed, broadcasting: 3
I0207 10:58:57.532623       9 log.go:172] (0xc000b05080) (0xc001dba1e0) Stream removed, broadcasting: 5
Feb  7 10:58:57.532: INFO: Exec stderr: ""
Feb  7 10:58:57.532: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:57.532: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:57.615002       9 log.go:172] (0xc001b964d0) (0xc000620500) Create stream
I0207 10:58:57.615163       9 log.go:172] (0xc001b964d0) (0xc000620500) Stream added, broadcasting: 1
I0207 10:58:57.619319       9 log.go:172] (0xc001b964d0) Reply frame received for 1
I0207 10:58:57.619393       9 log.go:172] (0xc001b964d0) (0xc000882960) Create stream
I0207 10:58:57.619410       9 log.go:172] (0xc001b964d0) (0xc000882960) Stream added, broadcasting: 3
I0207 10:58:57.620436       9 log.go:172] (0xc001b964d0) Reply frame received for 3
I0207 10:58:57.620469       9 log.go:172] (0xc001b964d0) (0xc000882be0) Create stream
I0207 10:58:57.620483       9 log.go:172] (0xc001b964d0) (0xc000882be0) Stream added, broadcasting: 5
I0207 10:58:57.621478       9 log.go:172] (0xc001b964d0) Reply frame received for 5
I0207 10:58:57.776800       9 log.go:172] (0xc001b964d0) Data frame received for 3
I0207 10:58:57.776994       9 log.go:172] (0xc000882960) (3) Data frame handling
I0207 10:58:57.777035       9 log.go:172] (0xc000882960) (3) Data frame sent
I0207 10:58:57.927115       9 log.go:172] (0xc001b964d0) Data frame received for 1
I0207 10:58:57.927243       9 log.go:172] (0xc000620500) (1) Data frame handling
I0207 10:58:57.927277       9 log.go:172] (0xc000620500) (1) Data frame sent
I0207 10:58:57.927314       9 log.go:172] (0xc001b964d0) (0xc000620500) Stream removed, broadcasting: 1
I0207 10:58:57.927832       9 log.go:172] (0xc001b964d0) (0xc000882960) Stream removed, broadcasting: 3
I0207 10:58:57.927891       9 log.go:172] (0xc001b964d0) (0xc000882be0) Stream removed, broadcasting: 5
I0207 10:58:57.927972       9 log.go:172] (0xc001b964d0) (0xc000620500) Stream removed, broadcasting: 1
I0207 10:58:57.928003       9 log.go:172] (0xc001b964d0) (0xc000882960) Stream removed, broadcasting: 3
I0207 10:58:57.928008       9 log.go:172] (0xc001b964d0) (0xc000882be0) Stream removed, broadcasting: 5
Feb  7 10:58:57.928: INFO: Exec stderr: ""
Feb  7 10:58:57.928: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:57.928: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:57.928410       9 log.go:172] (0xc001b964d0) Go away received
I0207 10:58:58.017545       9 log.go:172] (0xc0018c22c0) (0xc001dba500) Create stream
I0207 10:58:58.017748       9 log.go:172] (0xc0018c22c0) (0xc001dba500) Stream added, broadcasting: 1
I0207 10:58:58.026850       9 log.go:172] (0xc0018c22c0) Reply frame received for 1
I0207 10:58:58.026898       9 log.go:172] (0xc0018c22c0) (0xc000620780) Create stream
I0207 10:58:58.026909       9 log.go:172] (0xc0018c22c0) (0xc000620780) Stream added, broadcasting: 3
I0207 10:58:58.029646       9 log.go:172] (0xc0018c22c0) Reply frame received for 3
I0207 10:58:58.029677       9 log.go:172] (0xc0018c22c0) (0xc000882fa0) Create stream
I0207 10:58:58.029685       9 log.go:172] (0xc0018c22c0) (0xc000882fa0) Stream added, broadcasting: 5
I0207 10:58:58.030992       9 log.go:172] (0xc0018c22c0) Reply frame received for 5
I0207 10:58:58.175776       9 log.go:172] (0xc0018c22c0) Data frame received for 3
I0207 10:58:58.175880       9 log.go:172] (0xc000620780) (3) Data frame handling
I0207 10:58:58.175951       9 log.go:172] (0xc000620780) (3) Data frame sent
I0207 10:58:58.322773       9 log.go:172] (0xc0018c22c0) (0xc000620780) Stream removed, broadcasting: 3
I0207 10:58:58.322907       9 log.go:172] (0xc0018c22c0) Data frame received for 1
I0207 10:58:58.322987       9 log.go:172] (0xc001dba500) (1) Data frame handling
I0207 10:58:58.323027       9 log.go:172] (0xc001dba500) (1) Data frame sent
I0207 10:58:58.323064       9 log.go:172] (0xc0018c22c0) (0xc000882fa0) Stream removed, broadcasting: 5
I0207 10:58:58.323130       9 log.go:172] (0xc0018c22c0) (0xc001dba500) Stream removed, broadcasting: 1
I0207 10:58:58.323158       9 log.go:172] (0xc0018c22c0) Go away received
I0207 10:58:58.323418       9 log.go:172] (0xc0018c22c0) (0xc001dba500) Stream removed, broadcasting: 1
I0207 10:58:58.323461       9 log.go:172] (0xc0018c22c0) (0xc000620780) Stream removed, broadcasting: 3
I0207 10:58:58.323514       9 log.go:172] (0xc0018c22c0) (0xc000882fa0) Stream removed, broadcasting: 5
Feb  7 10:58:58.323: INFO: Exec stderr: ""
Feb  7 10:58:58.323: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:58.323: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:58.416156       9 log.go:172] (0xc000b05810) (0xc000883ae0) Create stream
I0207 10:58:58.416384       9 log.go:172] (0xc000b05810) (0xc000883ae0) Stream added, broadcasting: 1
I0207 10:58:58.422968       9 log.go:172] (0xc000b05810) Reply frame received for 1
I0207 10:58:58.423019       9 log.go:172] (0xc000b05810) (0xc000620a00) Create stream
I0207 10:58:58.423033       9 log.go:172] (0xc000b05810) (0xc000620a00) Stream added, broadcasting: 3
I0207 10:58:58.424783       9 log.go:172] (0xc000b05810) Reply frame received for 3
I0207 10:58:58.424805       9 log.go:172] (0xc000b05810) (0xc000eea500) Create stream
I0207 10:58:58.424814       9 log.go:172] (0xc000b05810) (0xc000eea500) Stream added, broadcasting: 5
I0207 10:58:58.425552       9 log.go:172] (0xc000b05810) Reply frame received for 5
I0207 10:58:58.805369       9 log.go:172] (0xc000b05810) Data frame received for 3
I0207 10:58:58.805574       9 log.go:172] (0xc000620a00) (3) Data frame handling
I0207 10:58:58.805602       9 log.go:172] (0xc000620a00) (3) Data frame sent
I0207 10:58:58.919832       9 log.go:172] (0xc000b05810) (0xc000620a00) Stream removed, broadcasting: 3
I0207 10:58:58.919962       9 log.go:172] (0xc000b05810) (0xc000eea500) Stream removed, broadcasting: 5
I0207 10:58:58.920031       9 log.go:172] (0xc000b05810) Data frame received for 1
I0207 10:58:58.920044       9 log.go:172] (0xc000883ae0) (1) Data frame handling
I0207 10:58:58.920062       9 log.go:172] (0xc000883ae0) (1) Data frame sent
I0207 10:58:58.920076       9 log.go:172] (0xc000b05810) (0xc000883ae0) Stream removed, broadcasting: 1
I0207 10:58:58.920090       9 log.go:172] (0xc000b05810) Go away received
I0207 10:58:58.920644       9 log.go:172] (0xc000b05810) (0xc000883ae0) Stream removed, broadcasting: 1
I0207 10:58:58.920665       9 log.go:172] (0xc000b05810) (0xc000620a00) Stream removed, broadcasting: 3
I0207 10:58:58.920678       9 log.go:172] (0xc000b05810) (0xc000eea500) Stream removed, broadcasting: 5
Feb  7 10:58:58.920: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  7 10:58:58.920: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:58.920: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:58.990998       9 log.go:172] (0xc001b96b00) (0xc000621400) Create stream
I0207 10:58:58.991045       9 log.go:172] (0xc001b96b00) (0xc000621400) Stream added, broadcasting: 1
I0207 10:58:58.995800       9 log.go:172] (0xc001b96b00) Reply frame received for 1
I0207 10:58:58.995864       9 log.go:172] (0xc001b96b00) (0xc00058e280) Create stream
I0207 10:58:58.995886       9 log.go:172] (0xc001b96b00) (0xc00058e280) Stream added, broadcasting: 3
I0207 10:58:58.996976       9 log.go:172] (0xc001b96b00) Reply frame received for 3
I0207 10:58:58.997001       9 log.go:172] (0xc001b96b00) (0xc000621680) Create stream
I0207 10:58:58.997018       9 log.go:172] (0xc001b96b00) (0xc000621680) Stream added, broadcasting: 5
I0207 10:58:58.998048       9 log.go:172] (0xc001b96b00) Reply frame received for 5
I0207 10:58:59.099089       9 log.go:172] (0xc001b96b00) Data frame received for 3
I0207 10:58:59.099151       9 log.go:172] (0xc00058e280) (3) Data frame handling
I0207 10:58:59.099165       9 log.go:172] (0xc00058e280) (3) Data frame sent
I0207 10:58:59.248887       9 log.go:172] (0xc001b96b00) Data frame received for 1
I0207 10:58:59.249034       9 log.go:172] (0xc001b96b00) (0xc000621680) Stream removed, broadcasting: 5
I0207 10:58:59.249107       9 log.go:172] (0xc000621400) (1) Data frame handling
I0207 10:58:59.249142       9 log.go:172] (0xc000621400) (1) Data frame sent
I0207 10:58:59.249179       9 log.go:172] (0xc001b96b00) (0xc000621400) Stream removed, broadcasting: 1
I0207 10:58:59.249276       9 log.go:172] (0xc001b96b00) (0xc00058e280) Stream removed, broadcasting: 3
I0207 10:58:59.249294       9 log.go:172] (0xc001b96b00) Go away received
I0207 10:58:59.249711       9 log.go:172] (0xc001b96b00) (0xc000621400) Stream removed, broadcasting: 1
I0207 10:58:59.249759       9 log.go:172] (0xc001b96b00) (0xc00058e280) Stream removed, broadcasting: 3
I0207 10:58:59.249973       9 log.go:172] (0xc001b96b00) (0xc000621680) Stream removed, broadcasting: 5
Feb  7 10:58:59.250: INFO: Exec stderr: ""
Feb  7 10:58:59.250: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:59.250: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:59.316461       9 log.go:172] (0xc001b96fd0) (0xc000621f40) Create stream
I0207 10:58:59.316535       9 log.go:172] (0xc001b96fd0) (0xc000621f40) Stream added, broadcasting: 1
I0207 10:58:59.320829       9 log.go:172] (0xc001b96fd0) Reply frame received for 1
I0207 10:58:59.320861       9 log.go:172] (0xc001b96fd0) (0xc001dba5a0) Create stream
I0207 10:58:59.320885       9 log.go:172] (0xc001b96fd0) (0xc001dba5a0) Stream added, broadcasting: 3
I0207 10:58:59.321852       9 log.go:172] (0xc001b96fd0) Reply frame received for 3
I0207 10:58:59.321873       9 log.go:172] (0xc001b96fd0) (0xc00058e640) Create stream
I0207 10:58:59.321881       9 log.go:172] (0xc001b96fd0) (0xc00058e640) Stream added, broadcasting: 5
I0207 10:58:59.322675       9 log.go:172] (0xc001b96fd0) Reply frame received for 5
I0207 10:58:59.430079       9 log.go:172] (0xc001b96fd0) Data frame received for 3
I0207 10:58:59.430164       9 log.go:172] (0xc001dba5a0) (3) Data frame handling
I0207 10:58:59.430185       9 log.go:172] (0xc001dba5a0) (3) Data frame sent
I0207 10:58:59.564566       9 log.go:172] (0xc001b96fd0) Data frame received for 1
I0207 10:58:59.564661       9 log.go:172] (0xc000621f40) (1) Data frame handling
I0207 10:58:59.564689       9 log.go:172] (0xc000621f40) (1) Data frame sent
I0207 10:58:59.564873       9 log.go:172] (0xc001b96fd0) (0xc00058e640) Stream removed, broadcasting: 5
I0207 10:58:59.564923       9 log.go:172] (0xc001b96fd0) (0xc001dba5a0) Stream removed, broadcasting: 3
I0207 10:58:59.564991       9 log.go:172] (0xc001b96fd0) (0xc000621f40) Stream removed, broadcasting: 1
I0207 10:58:59.565027       9 log.go:172] (0xc001b96fd0) Go away received
I0207 10:58:59.565411       9 log.go:172] (0xc001b96fd0) (0xc000621f40) Stream removed, broadcasting: 1
I0207 10:58:59.565613       9 log.go:172] (0xc001b96fd0) (0xc001dba5a0) Stream removed, broadcasting: 3
I0207 10:58:59.565646       9 log.go:172] (0xc001b96fd0) (0xc00058e640) Stream removed, broadcasting: 5
Feb  7 10:58:59.565: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  7 10:58:59.565: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:58:59.565: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:58:59.646727       9 log.go:172] (0xc000353d90) (0xc000c7c6e0) Create stream
I0207 10:58:59.646860       9 log.go:172] (0xc000353d90) (0xc000c7c6e0) Stream added, broadcasting: 1
I0207 10:58:59.656335       9 log.go:172] (0xc000353d90) Reply frame received for 1
I0207 10:58:59.656425       9 log.go:172] (0xc000353d90) (0xc000343cc0) Create stream
I0207 10:58:59.656451       9 log.go:172] (0xc000353d90) (0xc000343cc0) Stream added, broadcasting: 3
I0207 10:58:59.657973       9 log.go:172] (0xc000353d90) Reply frame received for 3
I0207 10:58:59.658005       9 log.go:172] (0xc000353d90) (0xc001dba6e0) Create stream
I0207 10:58:59.658022       9 log.go:172] (0xc000353d90) (0xc001dba6e0) Stream added, broadcasting: 5
I0207 10:58:59.659489       9 log.go:172] (0xc000353d90) Reply frame received for 5
I0207 10:58:59.824556       9 log.go:172] (0xc000353d90) Data frame received for 3
I0207 10:58:59.824739       9 log.go:172] (0xc000343cc0) (3) Data frame handling
I0207 10:58:59.824776       9 log.go:172] (0xc000343cc0) (3) Data frame sent
I0207 10:59:00.066814       9 log.go:172] (0xc000353d90) Data frame received for 1
I0207 10:59:00.066901       9 log.go:172] (0xc000353d90) (0xc000343cc0) Stream removed, broadcasting: 3
I0207 10:59:00.066953       9 log.go:172] (0xc000c7c6e0) (1) Data frame handling
I0207 10:59:00.066978       9 log.go:172] (0xc000c7c6e0) (1) Data frame sent
I0207 10:59:00.067022       9 log.go:172] (0xc000353d90) (0xc001dba6e0) Stream removed, broadcasting: 5
I0207 10:59:00.067064       9 log.go:172] (0xc000353d90) (0xc000c7c6e0) Stream removed, broadcasting: 1
I0207 10:59:00.067084       9 log.go:172] (0xc000353d90) Go away received
I0207 10:59:00.067404       9 log.go:172] (0xc000353d90) (0xc000c7c6e0) Stream removed, broadcasting: 1
I0207 10:59:00.067416       9 log.go:172] (0xc000353d90) (0xc000343cc0) Stream removed, broadcasting: 3
I0207 10:59:00.067421       9 log.go:172] (0xc000353d90) (0xc001dba6e0) Stream removed, broadcasting: 5
Feb  7 10:59:00.067: INFO: Exec stderr: ""
Feb  7 10:59:00.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:59:00.067: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:59:00.143083       9 log.go:172] (0xc0018242c0) (0xc000eea8c0) Create stream
I0207 10:59:00.143152       9 log.go:172] (0xc0018242c0) (0xc000eea8c0) Stream added, broadcasting: 1
I0207 10:59:00.146899       9 log.go:172] (0xc0018242c0) Reply frame received for 1
I0207 10:59:00.146928       9 log.go:172] (0xc0018242c0) (0xc001dba780) Create stream
I0207 10:59:00.146938       9 log.go:172] (0xc0018242c0) (0xc001dba780) Stream added, broadcasting: 3
I0207 10:59:00.147794       9 log.go:172] (0xc0018242c0) Reply frame received for 3
I0207 10:59:00.147814       9 log.go:172] (0xc0018242c0) (0xc000c7c780) Create stream
I0207 10:59:00.147822       9 log.go:172] (0xc0018242c0) (0xc000c7c780) Stream added, broadcasting: 5
I0207 10:59:00.148500       9 log.go:172] (0xc0018242c0) Reply frame received for 5
I0207 10:59:00.274314       9 log.go:172] (0xc0018242c0) Data frame received for 3
I0207 10:59:00.274353       9 log.go:172] (0xc001dba780) (3) Data frame handling
I0207 10:59:00.274367       9 log.go:172] (0xc001dba780) (3) Data frame sent
I0207 10:59:00.397286       9 log.go:172] (0xc0018242c0) (0xc001dba780) Stream removed, broadcasting: 3
I0207 10:59:00.397396       9 log.go:172] (0xc0018242c0) Data frame received for 1
I0207 10:59:00.397427       9 log.go:172] (0xc000eea8c0) (1) Data frame handling
I0207 10:59:00.397463       9 log.go:172] (0xc000eea8c0) (1) Data frame sent
I0207 10:59:00.397490       9 log.go:172] (0xc0018242c0) (0xc000c7c780) Stream removed, broadcasting: 5
I0207 10:59:00.397519       9 log.go:172] (0xc0018242c0) (0xc000eea8c0) Stream removed, broadcasting: 1
I0207 10:59:00.397553       9 log.go:172] (0xc0018242c0) Go away received
I0207 10:59:00.397871       9 log.go:172] (0xc0018242c0) (0xc000eea8c0) Stream removed, broadcasting: 1
I0207 10:59:00.397972       9 log.go:172] (0xc0018242c0) (0xc001dba780) Stream removed, broadcasting: 3
I0207 10:59:00.397999       9 log.go:172] (0xc0018242c0) (0xc000c7c780) Stream removed, broadcasting: 5
Feb  7 10:59:00.398: INFO: Exec stderr: ""
Feb  7 10:59:00.398: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:59:00.398: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:59:00.448899       9 log.go:172] (0xc001bc82c0) (0xc00058ed20) Create stream
I0207 10:59:00.448941       9 log.go:172] (0xc001bc82c0) (0xc00058ed20) Stream added, broadcasting: 1
I0207 10:59:00.454397       9 log.go:172] (0xc001bc82c0) Reply frame received for 1
I0207 10:59:00.454473       9 log.go:172] (0xc001bc82c0) (0xc001dba820) Create stream
I0207 10:59:00.454495       9 log.go:172] (0xc001bc82c0) (0xc001dba820) Stream added, broadcasting: 3
I0207 10:59:00.455747       9 log.go:172] (0xc001bc82c0) Reply frame received for 3
I0207 10:59:00.455807       9 log.go:172] (0xc001bc82c0) (0xc000eea960) Create stream
I0207 10:59:00.455824       9 log.go:172] (0xc001bc82c0) (0xc000eea960) Stream added, broadcasting: 5
I0207 10:59:00.457081       9 log.go:172] (0xc001bc82c0) Reply frame received for 5
I0207 10:59:00.615372       9 log.go:172] (0xc001bc82c0) Data frame received for 3
I0207 10:59:00.615419       9 log.go:172] (0xc001dba820) (3) Data frame handling
I0207 10:59:00.615453       9 log.go:172] (0xc001dba820) (3) Data frame sent
I0207 10:59:00.745473       9 log.go:172] (0xc001bc82c0) (0xc001dba820) Stream removed, broadcasting: 3
I0207 10:59:00.745568       9 log.go:172] (0xc001bc82c0) Data frame received for 1
I0207 10:59:00.745604       9 log.go:172] (0xc001bc82c0) (0xc000eea960) Stream removed, broadcasting: 5
I0207 10:59:00.745647       9 log.go:172] (0xc00058ed20) (1) Data frame handling
I0207 10:59:00.745674       9 log.go:172] (0xc00058ed20) (1) Data frame sent
I0207 10:59:00.745685       9 log.go:172] (0xc001bc82c0) (0xc00058ed20) Stream removed, broadcasting: 1
I0207 10:59:00.745704       9 log.go:172] (0xc001bc82c0) Go away received
I0207 10:59:00.745886       9 log.go:172] (0xc001bc82c0) (0xc00058ed20) Stream removed, broadcasting: 1
I0207 10:59:00.745907       9 log.go:172] (0xc001bc82c0) (0xc001dba820) Stream removed, broadcasting: 3
I0207 10:59:00.745924       9 log.go:172] (0xc001bc82c0) (0xc000eea960) Stream removed, broadcasting: 5
Feb  7 10:59:00.745: INFO: Exec stderr: ""
Feb  7 10:59:00.746: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lkhlf PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 10:59:00.746: INFO: >>> kubeConfig: /root/.kube/config
I0207 10:59:00.830001       9 log.go:172] (0xc000b05ce0) (0xc0001a3220) Create stream
I0207 10:59:00.830045       9 log.go:172] (0xc000b05ce0) (0xc0001a3220) Stream added, broadcasting: 1
I0207 10:59:00.834143       9 log.go:172] (0xc000b05ce0) Reply frame received for 1
I0207 10:59:00.834170       9 log.go:172] (0xc000b05ce0) (0xc00058ee60) Create stream
I0207 10:59:00.834182       9 log.go:172] (0xc000b05ce0) (0xc00058ee60) Stream added, broadcasting: 3
I0207 10:59:00.835525       9 log.go:172] (0xc000b05ce0) Reply frame received for 3
I0207 10:59:00.835544       9 log.go:172] (0xc000b05ce0) (0xc000c7c8c0) Create stream
I0207 10:59:00.835552       9 log.go:172] (0xc000b05ce0) (0xc000c7c8c0) Stream added, broadcasting: 5
I0207 10:59:00.837245       9 log.go:172] (0xc000b05ce0) Reply frame received for 5
I0207 10:59:00.978656       9 log.go:172] (0xc000b05ce0) Data frame received for 3
I0207 10:59:00.978773       9 log.go:172] (0xc00058ee60) (3) Data frame handling
I0207 10:59:00.978890       9 log.go:172] (0xc00058ee60) (3) Data frame sent
I0207 10:59:01.087779       9 log.go:172] (0xc000b05ce0) Data frame received for 1
I0207 10:59:01.087833       9 log.go:172] (0xc0001a3220) (1) Data frame handling
I0207 10:59:01.087855       9 log.go:172] (0xc0001a3220) (1) Data frame sent
I0207 10:59:01.087868       9 log.go:172] (0xc000b05ce0) (0xc0001a3220) Stream removed, broadcasting: 1
I0207 10:59:01.088389       9 log.go:172] (0xc000b05ce0) (0xc00058ee60) Stream removed, broadcasting: 3
I0207 10:59:01.089309       9 log.go:172] (0xc000b05ce0) (0xc000c7c8c0) Stream removed, broadcasting: 5
I0207 10:59:01.089432       9 log.go:172] (0xc000b05ce0) (0xc0001a3220) Stream removed, broadcasting: 1
I0207 10:59:01.089467       9 log.go:172] (0xc000b05ce0) (0xc00058ee60) Stream removed, broadcasting: 3
I0207 10:59:01.089475       9 log.go:172] (0xc000b05ce0) (0xc000c7c8c0) Stream removed, broadcasting: 5
Feb  7 10:59:01.089: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 10:59:01.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-lkhlf" for this suite.
Feb  7 10:59:45.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 10:59:45.237: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-lkhlf, resource: bindings, ignored listing per whitelist
Feb  7 10:59:45.320: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-lkhlf deletion completed in 44.217463298s

• [SLOW TEST:72.920 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
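Editor's note: the KubeletManagedEtcHosts spec above exercises three cases — a kubelet-managed `/etc/hosts`, a container that mounts its own `/etc/hosts` (left alone by the kubelet), and a `hostNetwork=true` pod that keeps the node's file. The pod names `test-pod` and `test-host-network-pod` and the `busybox-N` container names come from the log itself; everything else in this sketch (image, command, volume name) is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox-1                  # /etc/hosts is kubelet-managed here
    image: busybox                   # assumed image
    command: ["sleep", "900"]
  - name: busybox-3                  # mounts its own /etc/hosts, so the kubelet leaves it alone
    image: busybox
    command: ["sleep", "900"]
    volumeMounts:
    - name: host-etc-hosts           # assumed volume name
      mountPath: /etc/hosts
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
---
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true                  # hostNetwork pods see the node's own /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "900"]
```

The `ExecWithOptions {Command:[cat /etc/hosts] ...}` lines in the log are the spec reading each container's file over the exec subprotocol to verify which of these cases applies.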
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 10:59:45.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 10:59:45.726: INFO: Number of nodes with available pods: 0
Feb  7 10:59:45.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 10:59:54.846: INFO: Number of nodes with available pods: 1
Feb  7 10:59:54.847: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  7 10:59:54.983: INFO: Number of nodes with available pods: 0
Feb  7 10:59:54.983: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:00:21.024: INFO: Number of nodes with available pods: 1
Feb  7 11:00:21.024: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dw8qp, will wait for the garbage collector to delete the pods
Feb  7 11:00:21.173: INFO: Deleting DaemonSet.extensions daemon-set took: 85.724545ms
Feb  7 11:00:21.274: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.368325ms
Feb  7 11:00:29.187: INFO: Number of nodes with available pods: 0
Feb  7 11:00:29.187: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 11:00:29.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dw8qp/daemonsets","resourceVersion":"20852214"},"items":null}

Feb  7 11:00:29.205: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dw8qp/pods","resourceVersion":"20852214"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:00:29.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dw8qp" for this suite.
Feb  7 11:00:35.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:00:35.523: INFO: namespace: e2e-tests-daemonsets-dw8qp, resource: bindings, ignored listing per whitelist
Feb  7 11:00:35.523: INFO: namespace e2e-tests-daemonsets-dw8qp deletion completed in 6.301208315s

• [SLOW TEST:50.203 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
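The "simple DaemonSet" named daemon-set that the test above creates is built in Go, but in spirit it resembles the following manifest (labels are assumptions; the image matches the one used elsewhere in this run):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label key/value
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Deleting one of its pods (the "Stop a daemon pod" step) causes the DaemonSet controller to recreate it, which is why the available-pod count drops to 0 and then returns to 1.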
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:00:35.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1148e863-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:00:35.878: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-sm5mt" to be "success or failure"
Feb  7 11:00:35.924: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.898122ms
Feb  7 11:00:37.939: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060855093s
Feb  7 11:00:39.958: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080418622s
Feb  7 11:00:42.138: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260001212s
Feb  7 11:00:44.165: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28687179s
Feb  7 11:00:46.286: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.408321628s
Feb  7 11:00:48.312: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.433797663s
STEP: Saw pod success
Feb  7 11:00:48.312: INFO: Pod "pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:00:48.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 11:00:48.530: INFO: Waiting for pod pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:00:48.550: INFO: Pod pod-projected-configmaps-114a7a36-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:00:48.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sm5mt" for this suite.
Feb  7 11:00:54.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:00:54.856: INFO: namespace: e2e-tests-projected-sm5mt, resource: bindings, ignored listing per whitelist
Feb  7 11:00:54.976: INFO: namespace e2e-tests-projected-sm5mt deletion completed in 6.414259225s

• [SLOW TEST:19.452 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
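The Projected configMap test above ("with mappings and Item mode set") consumes a ConfigMap through a projected volume, remapping a key to a path and giving the file an explicit mode. A hedged sketch of such a pod; the pod name, image, key, path, and mode are assumptions chosen to match the test's naming pattern:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # assumed name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29                 # assumed image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # follows the naming pattern above
          items:
          - key: data-2                 # assumed key
            path: path/to/data-2        # the "mappings" part of the test name
            mode: 0400                  # the "Item mode set" part of the test name
  restartPolicy: Never
```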
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:00:54.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1ce28d4b-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:00:55.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-864qk" to be "success or failure"
Feb  7 11:00:55.308: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 82.817937ms
Feb  7 11:00:57.326: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100764393s
Feb  7 11:00:59.345: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119409864s
Feb  7 11:01:01.505: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279567056s
Feb  7 11:01:03.526: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300800311s
Feb  7 11:01:05.591: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.365202624s
STEP: Saw pod success
Feb  7 11:01:05.591: INFO: Pod "pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:01:05.599: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 11:01:05.811: INFO: Waiting for pod pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:01:05.862: INFO: Pod pod-configmaps-1ce6beba-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:01:05.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-864qk" for this suite.
Feb  7 11:01:12.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:01:12.061: INFO: namespace: e2e-tests-configmap-864qk, resource: bindings, ignored listing per whitelist
Feb  7 11:01:12.222: INFO: namespace e2e-tests-configmap-864qk deletion completed in 6.336709952s

• [SLOW TEST:17.246 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:01:12.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:01:12.378: INFO: Creating deployment "test-recreate-deployment"
Feb  7 11:01:12.395: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  7 11:01:12.482: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  7 11:01:14.524: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  7 11:01:14.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:01:20.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716670072, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:01:22.573: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  7 11:01:22.625: INFO: Updating deployment test-recreate-deployment
Feb  7 11:01:22.626: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  7 11:01:24.032: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-h8xd7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h8xd7/deployments/test-recreate-deployment,UID:27259c31-4999-11ea-a994-fa163e34d433,ResourceVersion:20852390,Generation:2,CreationTimestamp:2020-02-07 11:01:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-07 11:01:23 +0000 UTC 2020-02-07 11:01:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-07 11:01:23 +0000 UTC 2020-02-07 11:01:12 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 11:01:24.080: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-h8xd7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h8xd7/replicasets/test-recreate-deployment-589c4bfd,UID:2d784c3e-4999-11ea-a994-fa163e34d433,ResourceVersion:20852386,Generation:1,CreationTimestamp:2020-02-07 11:01:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 27259c31-4999-11ea-a994-fa163e34d433 0xc001d79b7f 0xc001d79ba0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 11:01:24.080: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  7 11:01:24.080: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-h8xd7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h8xd7/replicasets/test-recreate-deployment-5bf7f65dc,UID:27364162-4999-11ea-a994-fa163e34d433,ResourceVersion:20852378,Generation:2,CreationTimestamp:2020-02-07 11:01:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 27259c31-4999-11ea-a994-fa163e34d433 0xc001d79c90 0xc001d79c91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 11:01:24.154: INFO: Pod "test-recreate-deployment-589c4bfd-mlfwf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-mlfwf,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-h8xd7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h8xd7/pods/test-recreate-deployment-589c4bfd-mlfwf,UID:2d81a135-4999-11ea-a994-fa163e34d433,ResourceVersion:20852392,Generation:0,CreationTimestamp:2020-02-07 11:01:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 2d784c3e-4999-11ea-a994-fa163e34d433 0xc0022205ff 0xc002220610}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-224jk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-224jk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-224jk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002220670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002220690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:01:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:01:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:01:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:01:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 11:01:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:01:24.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-h8xd7" for this suite.
Feb  7 11:01:34.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:01:34.347: INFO: namespace: e2e-tests-deployment-h8xd7, resource: bindings, ignored listing per whitelist
Feb  7 11:01:34.373: INFO: namespace e2e-tests-deployment-h8xd7 deletion completed in 10.200966964s

• [SLOW TEST:22.151 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:01:34.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-346936a3-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:01:34.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-l2ks2" to be "success or failure"
Feb  7 11:01:34.796: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.546527ms
Feb  7 11:01:36.811: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096608672s
Feb  7 11:01:38.840: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126057015s
Feb  7 11:01:40.883: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168776907s
Feb  7 11:01:42.899: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185160146s
Feb  7 11:01:45.449: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.73478567s
Feb  7 11:01:47.462: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.747510641s
STEP: Saw pod success
Feb  7 11:01:47.462: INFO: Pod "pod-configmaps-346c2177-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:01:47.466: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-346c2177-4999-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 11:01:49.680: INFO: Waiting for pod pod-configmaps-346c2177-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:01:49.697: INFO: Pod pod-configmaps-346c2177-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:01:49.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l2ks2" for this suite.
Feb  7 11:01:55.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:01:55.858: INFO: namespace: e2e-tests-configmap-l2ks2, resource: bindings, ignored listing per whitelist
Feb  7 11:01:55.976: INFO: namespace e2e-tests-configmap-l2ks2 deletion completed in 6.250750363s

• [SLOW TEST:21.603 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:01:55.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4wzjq
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-4wzjq
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-4wzjq
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-4wzjq
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-4wzjq
Feb  7 11:02:08.417: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wzjq, name: ss-0, uid: 46c79847-4999-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb  7 11:02:12.509: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wzjq, name: ss-0, uid: 46c79847-4999-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 11:02:12.704: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4wzjq, name: ss-0, uid: 46c79847-4999-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb  7 11:02:12.721: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-4wzjq
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-4wzjq
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-4wzjq and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  7 11:02:25.106: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4wzjq
Feb  7 11:02:25.112: INFO: Scaling statefulset ss to 0
Feb  7 11:02:35.165: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 11:02:35.182: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:02:35.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4wzjq" for this suite.
Feb  7 11:02:41.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:02:41.342: INFO: namespace: e2e-tests-statefulset-4wzjq, resource: bindings, ignored listing per whitelist
Feb  7 11:02:41.443: INFO: namespace e2e-tests-statefulset-4wzjq deletion completed in 6.211915903s

• [SLOW TEST:45.467 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:02:41.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb  7 11:02:42.441: INFO: created pod pod-service-account-defaultsa
Feb  7 11:02:42.441: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  7 11:02:42.638: INFO: created pod pod-service-account-mountsa
Feb  7 11:02:42.638: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  7 11:02:42.694: INFO: created pod pod-service-account-nomountsa
Feb  7 11:02:42.694: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  7 11:02:42.878: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  7 11:02:42.878: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  7 11:02:43.182: INFO: created pod pod-service-account-mountsa-mountspec
Feb  7 11:02:43.182: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  7 11:02:43.243: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  7 11:02:43.243: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  7 11:02:43.502: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  7 11:02:43.503: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  7 11:02:43.548: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  7 11:02:43.548: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  7 11:02:45.484: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  7 11:02:45.484: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:02:45.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-g7xxt" for this suite.
Feb  7 11:03:22.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:03:22.732: INFO: namespace: e2e-tests-svcaccounts-g7xxt, resource: bindings, ignored listing per whitelist
Feb  7 11:03:22.767: INFO: namespace e2e-tests-svcaccounts-g7xxt deletion completed in 36.357607721s

• [SLOW TEST:41.324 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:03:22.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  7 11:03:23.063: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  7 11:03:28.079: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:03:29.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6gvn2" for this suite.
Feb  7 11:03:42.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:03:42.763: INFO: namespace: e2e-tests-replication-controller-6gvn2, resource: bindings, ignored listing per whitelist
Feb  7 11:03:42.963: INFO: namespace e2e-tests-replication-controller-6gvn2 deletion completed in 13.381248585s

• [SLOW TEST:20.196 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:03:42.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  7 11:03:43.167: INFO: Waiting up to 5m0s for pod "pod-81034688-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-6n9ft" to be "success or failure"
Feb  7 11:03:43.176: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.700165ms
Feb  7 11:03:45.415: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247368936s
Feb  7 11:03:47.423: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25574379s
Feb  7 11:03:49.450: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282788077s
Feb  7 11:03:51.471: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.30368195s
STEP: Saw pod success
Feb  7 11:03:51.471: INFO: Pod "pod-81034688-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:03:51.478: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-81034688-4999-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:03:51.624: INFO: Waiting for pod pod-81034688-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:03:51.632: INFO: Pod pod-81034688-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:03:51.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6n9ft" for this suite.
Feb  7 11:03:57.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:03:57.790: INFO: namespace: e2e-tests-emptydir-6n9ft, resource: bindings, ignored listing per whitelist
Feb  7 11:03:57.838: INFO: namespace e2e-tests-emptydir-6n9ft deletion completed in 6.198463202s

• [SLOW TEST:14.875 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:03:57.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0207 11:04:28.957374       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 11:04:28.957: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:04:28.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6hf5q" for this suite.
Feb  7 11:04:39.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:04:39.828: INFO: namespace: e2e-tests-gc-6hf5q, resource: bindings, ignored listing per whitelist
Feb  7 11:04:39.908: INFO: namespace e2e-tests-gc-6hf5q deletion completed in 10.943748766s

• [SLOW TEST:42.069 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:04:39.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb  7 11:04:40.812: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix576233773/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:04:40.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tzsqz" for this suite.
Feb  7 11:04:47.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:04:47.142: INFO: namespace: e2e-tests-kubectl-tzsqz, resource: bindings, ignored listing per whitelist
Feb  7 11:04:47.391: INFO: namespace e2e-tests-kubectl-tzsqz deletion completed in 6.391242825s

• [SLOW TEST:7.483 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:04:47.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb  7 11:04:47.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:04:49.878: INFO: stderr: ""
Feb  7 11:04:49.878: INFO: stdout: "pod/pause created\n"
Feb  7 11:04:49.879: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  7 11:04:49.879: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tdbjv" to be "running and ready"
Feb  7 11:04:49.976: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 96.988532ms
Feb  7 11:04:52.143: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264217643s
Feb  7 11:04:54.175: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295739207s
Feb  7 11:04:56.382: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503279444s
Feb  7 11:04:58.402: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522696202s
Feb  7 11:05:00.428: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.548705486s
Feb  7 11:05:00.428: INFO: Pod "pause" satisfied condition "running and ready"
Feb  7 11:05:00.428: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  7 11:05:00.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:00.681: INFO: stderr: ""
Feb  7 11:05:00.681: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  7 11:05:00.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:00.839: INFO: stderr: ""
Feb  7 11:05:00.839: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  7 11:05:00.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:00.968: INFO: stderr: ""
Feb  7 11:05:00.968: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  7 11:05:00.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:01.103: INFO: stderr: ""
Feb  7 11:05:01.103: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb  7 11:05:01.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:01.221: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 11:05:01.222: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  7 11:05:01.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tdbjv'
Feb  7 11:05:01.426: INFO: stderr: "No resources found.\n"
Feb  7 11:05:01.426: INFO: stdout: ""
Feb  7 11:05:01.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tdbjv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 11:05:01.525: INFO: stderr: ""
Feb  7 11:05:01.525: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:05:01.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tdbjv" for this suite.
Feb  7 11:05:07.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:05:07.664: INFO: namespace: e2e-tests-kubectl-tdbjv, resource: bindings, ignored listing per whitelist
Feb  7 11:05:07.747: INFO: namespace e2e-tests-kubectl-tdbjv deletion completed in 6.206179508s

• [SLOW TEST:20.355 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:05:07.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  7 11:05:08.054: INFO: Waiting up to 5m0s for pod "pod-b391208e-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-s8tw2" to be "success or failure"
Feb  7 11:05:08.063: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021644ms
Feb  7 11:05:10.080: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02567831s
Feb  7 11:05:12.102: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048333236s
Feb  7 11:05:14.123: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069025055s
Feb  7 11:05:16.148: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093722834s
Feb  7 11:05:18.165: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110804852s
STEP: Saw pod success
Feb  7 11:05:18.165: INFO: Pod "pod-b391208e-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:05:18.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b391208e-4999-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:05:18.253: INFO: Waiting for pod pod-b391208e-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:05:18.524: INFO: Pod pod-b391208e-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:05:18.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s8tw2" for this suite.
Feb  7 11:05:25.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:05:25.439: INFO: namespace: e2e-tests-emptydir-s8tw2, resource: bindings, ignored listing per whitelist
Feb  7 11:05:25.513: INFO: namespace e2e-tests-emptydir-s8tw2 deletion completed in 6.954176024s

• [SLOW TEST:17.766 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:05:25.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-be334692-4999-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:05:37.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-59dfn" for this suite.
Feb  7 11:06:02.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:06:02.119: INFO: namespace: e2e-tests-configmap-59dfn, resource: bindings, ignored listing per whitelist
Feb  7 11:06:02.168: INFO: namespace e2e-tests-configmap-59dfn deletion completed in 24.19673385s

• [SLOW TEST:36.654 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:06:02.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d3fff15e-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:06:02.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-6t9lt" to be "success or failure"
Feb  7 11:06:02.415: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975814ms
Feb  7 11:06:04.486: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075152922s
Feb  7 11:06:06.509: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098594825s
Feb  7 11:06:08.725: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314328593s
Feb  7 11:06:10.750: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339731025s
Feb  7 11:06:12.767: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.356397656s
STEP: Saw pod success
Feb  7 11:06:12.767: INFO: Pod "pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:06:12.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 11:06:12.933: INFO: Waiting for pod pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:06:12.942: INFO: Pod pod-projected-configmaps-d4013845-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:06:12.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6t9lt" for this suite.
Feb  7 11:06:18.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:06:19.045: INFO: namespace: e2e-tests-projected-6t9lt, resource: bindings, ignored listing per whitelist
Feb  7 11:06:19.097: INFO: namespace e2e-tests-projected-6t9lt deletion completed in 6.14984203s

• [SLOW TEST:16.929 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:06:19.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  7 11:06:19.422: INFO: Waiting up to 5m0s for pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-var-expansion-99mhm" to be "success or failure"
Feb  7 11:06:19.599: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 177.468559ms
Feb  7 11:06:21.655: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233471436s
Feb  7 11:06:23.669: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246971969s
Feb  7 11:06:25.756: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334388666s
Feb  7 11:06:28.012: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590400657s
Feb  7 11:06:30.030: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.608205704s
STEP: Saw pod success
Feb  7 11:06:30.030: INFO: Pod "var-expansion-de233de6-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:06:30.037: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-de233de6-4999-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 11:06:30.215: INFO: Waiting for pod var-expansion-de233de6-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:06:30.251: INFO: Pod var-expansion-de233de6-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:06:30.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-99mhm" for this suite.
Feb  7 11:06:36.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:06:36.651: INFO: namespace: e2e-tests-var-expansion-99mhm, resource: bindings, ignored listing per whitelist
Feb  7 11:06:36.701: INFO: namespace e2e-tests-var-expansion-99mhm deletion completed in 6.434634768s

• [SLOW TEST:17.604 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:06:36.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-twp4x/configmap-test-e8a394fa-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:06:37.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-twp4x" to be "success or failure"
Feb  7 11:06:37.055: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.518275ms
Feb  7 11:06:39.078: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035182272s
Feb  7 11:06:41.089: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046456579s
Feb  7 11:06:43.111: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068168476s
Feb  7 11:06:45.765: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.722726172s
Feb  7 11:06:47.789: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.74617704s
STEP: Saw pod success
Feb  7 11:06:47.789: INFO: Pod "pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:06:47.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005 container env-test: 
STEP: delete the pod
Feb  7 11:06:48.278: INFO: Waiting for pod pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:06:48.290: INFO: Pod pod-configmaps-e8a58e12-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:06:48.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-twp4x" for this suite.
Feb  7 11:06:54.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:06:54.397: INFO: namespace: e2e-tests-configmap-twp4x, resource: bindings, ignored listing per whitelist
Feb  7 11:06:54.603: INFO: namespace e2e-tests-configmap-twp4x deletion completed in 6.306118868s

• [SLOW TEST:17.902 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:06:54.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-f34c1a49-4999-11ea-abae-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-f34c1a18-4999-11ea-abae-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  7 11:06:54.945: INFO: Waiting up to 5m0s for pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-jb5cj" to be "success or failure"
Feb  7 11:06:55.013: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.171352ms
Feb  7 11:06:57.025: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079618929s
Feb  7 11:06:59.083: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137960121s
Feb  7 11:07:01.333: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387331293s
Feb  7 11:07:03.348: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402926815s
Feb  7 11:07:05.362: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.416414629s
STEP: Saw pod success
Feb  7 11:07:05.362: INFO: Pod "projected-volume-f34c19a5-4999-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:07:05.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-f34c19a5-4999-11ea-abae-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Feb  7 11:07:06.124: INFO: Waiting for pod projected-volume-f34c19a5-4999-11ea-abae-0242ac110005 to disappear
Feb  7 11:07:06.218: INFO: Pod projected-volume-f34c19a5-4999-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:07:06.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jb5cj" for this suite.
Feb  7 11:07:14.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:07:14.310: INFO: namespace: e2e-tests-projected-jb5cj, resource: bindings, ignored listing per whitelist
Feb  7 11:07:14.394: INFO: namespace e2e-tests-projected-jb5cj deletion completed in 8.153960933s

• [SLOW TEST:19.790 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:07:14.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-24ljp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-24ljp to expose endpoints map[]
Feb  7 11:07:14.956: INFO: Get endpoints failed (9.387393ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  7 11:07:15.974: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ljp exposes endpoints map[] (1.027139865s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-24ljp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-24ljp to expose endpoints map[pod1:[100]]
Feb  7 11:07:20.509: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.515772565s elapsed, will retry)
Feb  7 11:07:23.784: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ljp exposes endpoints map[pod1:[100]] (7.790756077s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-24ljp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-24ljp to expose endpoints map[pod1:[100] pod2:[101]]
Feb  7 11:07:27.967: INFO: Unexpected endpoints: found map[ffdf0d40-4999-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (4.161944472s elapsed, will retry)
Feb  7 11:07:32.721: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ljp exposes endpoints map[pod1:[100] pod2:[101]] (8.916626751s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-24ljp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-24ljp to expose endpoints map[pod2:[101]]
Feb  7 11:07:33.855: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ljp exposes endpoints map[pod2:[101]] (1.120721603s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-24ljp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-24ljp to expose endpoints map[]
Feb  7 11:07:34.933: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-24ljp exposes endpoints map[] (1.057511903s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:07:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-24ljp" for this suite.
Feb  7 11:08:00.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:08:00.775: INFO: namespace: e2e-tests-services-24ljp, resource: bindings, ignored listing per whitelist
Feb  7 11:08:00.782: INFO: namespace e2e-tests-services-24ljp deletion completed in 24.464094674s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.388 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:08:00.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb  7 11:08:00.990: INFO: Waiting up to 5m0s for pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005" in namespace "e2e-tests-containers-9hl4s" to be "success or failure"
Feb  7 11:08:01.024: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.473316ms
Feb  7 11:08:03.034: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04414922s
Feb  7 11:08:05.045: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054687611s
Feb  7 11:08:07.385: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395416618s
Feb  7 11:08:09.398: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.407719354s
Feb  7 11:08:11.411: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421196154s
STEP: Saw pod success
Feb  7 11:08:11.411: INFO: Pod "client-containers-1aae1fc7-499a-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:08:11.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1aae1fc7-499a-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:08:12.113: INFO: Waiting for pod client-containers-1aae1fc7-499a-11ea-abae-0242ac110005 to disappear
Feb  7 11:08:12.134: INFO: Pod client-containers-1aae1fc7-499a-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:08:12.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9hl4s" for this suite.
Feb  7 11:08:18.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:08:18.815: INFO: namespace: e2e-tests-containers-9hl4s, resource: bindings, ignored listing per whitelist
Feb  7 11:08:18.839: INFO: namespace e2e-tests-containers-9hl4s deletion completed in 6.688075214s

• [SLOW TEST:18.057 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:08:18.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 11:08:18.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p27vc'
Feb  7 11:08:19.094: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 11:08:19.094: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb  7 11:08:21.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-p27vc'
Feb  7 11:08:21.491: INFO: stderr: ""
Feb  7 11:08:21.491: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:08:21.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p27vc" for this suite.
Feb  7 11:08:27.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:08:27.888: INFO: namespace: e2e-tests-kubectl-p27vc, resource: bindings, ignored listing per whitelist
Feb  7 11:08:27.926: INFO: namespace e2e-tests-kubectl-p27vc deletion completed in 6.42783004s

• [SLOW TEST:9.087 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:08:27.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:08:28.803: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb  7 11:08:28.873: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j5p9g/daemonsets","resourceVersion":"20853571"},"items":null}

Feb  7 11:08:28.878: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j5p9g/pods","resourceVersion":"20853571"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:08:28.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-j5p9g" for this suite.
Feb  7 11:08:36.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:08:36.983: INFO: namespace: e2e-tests-daemonsets-j5p9g, resource: bindings, ignored listing per whitelist
Feb  7 11:08:37.057: INFO: namespace e2e-tests-daemonsets-j5p9g deletion completed in 8.163840986s

S [SKIPPING] [9.131 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb  7 11:08:28.803: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:08:37.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  7 11:08:37.504: INFO: Waiting up to 5m0s for pod "client-containers-3064143a-499a-11ea-abae-0242ac110005" in namespace "e2e-tests-containers-srlkf" to be "success or failure"
Feb  7 11:08:37.532: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.132579ms
Feb  7 11:08:39.651: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147196945s
Feb  7 11:08:41.667: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163187501s
Feb  7 11:08:43.686: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182234701s
Feb  7 11:08:45.717: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212834026s
Feb  7 11:08:48.146: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642102761s
STEP: Saw pod success
Feb  7 11:08:48.146: INFO: Pod "client-containers-3064143a-499a-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:08:48.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3064143a-499a-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:08:48.330: INFO: Waiting for pod client-containers-3064143a-499a-11ea-abae-0242ac110005 to disappear
Feb  7 11:08:48.441: INFO: Pod client-containers-3064143a-499a-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:08:48.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-srlkf" for this suite.
Feb  7 11:08:54.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:08:54.678: INFO: namespace: e2e-tests-containers-srlkf, resource: bindings, ignored listing per whitelist
Feb  7 11:08:54.697: INFO: namespace e2e-tests-containers-srlkf deletion completed in 6.242318821s

• [SLOW TEST:17.639 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:08:54.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:08:54.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-2ncg5" to be "success or failure"
Feb  7 11:08:54.975: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.520064ms
Feb  7 11:08:56.987: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082269272s
Feb  7 11:08:59.009: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104495535s
Feb  7 11:09:01.023: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118497302s
Feb  7 11:09:03.039: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134400519s
Feb  7 11:09:05.063: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158662824s
STEP: Saw pod success
Feb  7 11:09:05.063: INFO: Pod "downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:09:05.071: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:09:05.244: INFO: Waiting for pod downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005 to disappear
Feb  7 11:09:05.257: INFO: Pod downwardapi-volume-3ad31d43-499a-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:09:05.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2ncg5" for this suite.
Feb  7 11:09:11.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:09:11.447: INFO: namespace: e2e-tests-downward-api-2ncg5, resource: bindings, ignored listing per whitelist
Feb  7 11:09:11.494: INFO: namespace e2e-tests-downward-api-2ncg5 deletion completed in 6.230331003s

• [SLOW TEST:16.796 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:09:11.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-k6nbv in namespace e2e-tests-proxy-6w4hp
I0207 11:09:11.910261       9 runners.go:184] Created replication controller with name: proxy-service-k6nbv, namespace: e2e-tests-proxy-6w4hp, replica count: 1
I0207 11:09:12.961523       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:13.962038       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:14.962386       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:15.962800       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:16.963327       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:17.963809       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:18.964252       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:19.964608       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 11:09:20.965044       9 runners.go:184] proxy-service-k6nbv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 11:09:20.976: INFO: setup took 9.197112406s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  7 11:09:21.010: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6w4hp/pods/proxy-service-k6nbv-5rc9n:1080/proxy/: ...
[log truncated: remaining proxy attempts, the end of the Proxy test, and the header of the following [sig-network] Networking test are missing]
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-hpdls
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 11:09:41.169: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 11:10:21.535: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-hpdls PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 11:10:21.535: INFO: >>> kubeConfig: /root/.kube/config
I0207 11:10:21.633236       9 log.go:172] (0xc000353d90) (0xc0021b6fa0) Create stream
I0207 11:10:21.633301       9 log.go:172] (0xc000353d90) (0xc0021b6fa0) Stream added, broadcasting: 1
I0207 11:10:21.640090       9 log.go:172] (0xc000353d90) Reply frame received for 1
I0207 11:10:21.640137       9 log.go:172] (0xc000353d90) (0xc0021b7040) Create stream
I0207 11:10:21.640170       9 log.go:172] (0xc000353d90) (0xc0021b7040) Stream added, broadcasting: 3
I0207 11:10:21.641855       9 log.go:172] (0xc000353d90) Reply frame received for 3
I0207 11:10:21.641900       9 log.go:172] (0xc000353d90) (0xc0021b7180) Create stream
I0207 11:10:21.641913       9 log.go:172] (0xc000353d90) (0xc0021b7180) Stream added, broadcasting: 5
I0207 11:10:21.643864       9 log.go:172] (0xc000353d90) Reply frame received for 5
I0207 11:10:21.952548       9 log.go:172] (0xc000353d90) Data frame received for 3
I0207 11:10:21.952697       9 log.go:172] (0xc0021b7040) (3) Data frame handling
I0207 11:10:21.952744       9 log.go:172] (0xc0021b7040) (3) Data frame sent
I0207 11:10:22.155638       9 log.go:172] (0xc000353d90) Data frame received for 1
I0207 11:10:22.155871       9 log.go:172] (0xc000353d90) (0xc0021b7040) Stream removed, broadcasting: 3
I0207 11:10:22.156047       9 log.go:172] (0xc0021b6fa0) (1) Data frame handling
I0207 11:10:22.156155       9 log.go:172] (0xc0021b6fa0) (1) Data frame sent
I0207 11:10:22.156188       9 log.go:172] (0xc000353d90) (0xc0021b7180) Stream removed, broadcasting: 5
I0207 11:10:22.156264       9 log.go:172] (0xc000353d90) (0xc0021b6fa0) Stream removed, broadcasting: 1
I0207 11:10:22.156483       9 log.go:172] (0xc000353d90) (0xc0021b6fa0) Stream removed, broadcasting: 1
I0207 11:10:22.156509       9 log.go:172] (0xc000353d90) (0xc0021b7040) Stream removed, broadcasting: 3
I0207 11:10:22.156517       9 log.go:172] (0xc000353d90) (0xc0021b7180) Stream removed, broadcasting: 5
Feb  7 11:10:22.156: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:10:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0207 11:10:22.158121       9 log.go:172] (0xc000353d90) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-hpdls" for this suite.
Feb  7 11:10:48.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:10:48.560: INFO: namespace: e2e-tests-pod-network-test-hpdls, resource: bindings, ignored listing per whitelist
Feb  7 11:10:48.599: INFO: namespace e2e-tests-pod-network-test-hpdls deletion completed in 26.414509656s

• [SLOW TEST:67.533 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:10:48.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-7eaabd5d-499a-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 11:10:48.807: INFO: Waiting up to 5m0s for pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-qlrq5" to be "success or failure"
Feb  7 11:10:48.841: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.602439ms
Feb  7 11:10:50.938: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130946797s
Feb  7 11:10:52.954: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147608541s
Feb  7 11:10:54.981: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17401424s
Feb  7 11:10:57.446: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63970575s
Feb  7 11:10:59.456: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.649729114s
STEP: Saw pod success
Feb  7 11:10:59.456: INFO: Pod "pod-secrets-7eae136a-499a-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:10:59.462: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7eae136a-499a-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 11:11:00.276: INFO: Waiting for pod pod-secrets-7eae136a-499a-11ea-abae-0242ac110005 to disappear
Feb  7 11:11:00.777: INFO: Pod pod-secrets-7eae136a-499a-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:11:00.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qlrq5" for this suite.
Feb  7 11:11:06.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:11:07.140: INFO: namespace: e2e-tests-secrets-qlrq5, resource: bindings, ignored listing per whitelist
Feb  7 11:11:07.147: INFO: namespace e2e-tests-secrets-qlrq5 deletion completed in 6.343269198s

• [SLOW TEST:18.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:11:07.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:11:07.337: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  7 11:11:07.420: INFO: Number of nodes with available pods: 0
Feb  7 11:11:07.420: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  7 11:11:07.522: INFO: Number of nodes with available pods: 0
Feb  7 11:11:07.522: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:08.550: INFO: Number of nodes with available pods: 0
Feb  7 11:11:08.551: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:09.541: INFO: Number of nodes with available pods: 0
Feb  7 11:11:09.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:10.549: INFO: Number of nodes with available pods: 0
Feb  7 11:11:10.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:11.534: INFO: Number of nodes with available pods: 0
Feb  7 11:11:11.534: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:12.559: INFO: Number of nodes with available pods: 0
Feb  7 11:11:12.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:13.621: INFO: Number of nodes with available pods: 0
Feb  7 11:11:13.621: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:14.900: INFO: Number of nodes with available pods: 0
Feb  7 11:11:14.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:15.664: INFO: Number of nodes with available pods: 0
Feb  7 11:11:15.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:16.542: INFO: Number of nodes with available pods: 0
Feb  7 11:11:16.542: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:17.573: INFO: Number of nodes with available pods: 0
Feb  7 11:11:17.573: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:18.561: INFO: Number of nodes with available pods: 0
Feb  7 11:11:18.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:19.551: INFO: Number of nodes with available pods: 1
Feb  7 11:11:19.551: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  7 11:11:19.723: INFO: Number of nodes with available pods: 1
Feb  7 11:11:19.723: INFO: Number of running nodes: 0, number of available pods: 1
Feb  7 11:11:20.741: INFO: Number of nodes with available pods: 0
Feb  7 11:11:20.741: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  7 11:11:20.799: INFO: Number of nodes with available pods: 0
Feb  7 11:11:20.799: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:21.920: INFO: Number of nodes with available pods: 0
Feb  7 11:11:21.920: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:22.828: INFO: Number of nodes with available pods: 0
Feb  7 11:11:22.828: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:23.811: INFO: Number of nodes with available pods: 0
Feb  7 11:11:23.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:24.819: INFO: Number of nodes with available pods: 0
Feb  7 11:11:24.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:25.822: INFO: Number of nodes with available pods: 0
Feb  7 11:11:25.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:26.815: INFO: Number of nodes with available pods: 0
Feb  7 11:11:26.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:27.822: INFO: Number of nodes with available pods: 0
Feb  7 11:11:27.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:28.812: INFO: Number of nodes with available pods: 0
Feb  7 11:11:28.812: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:29.829: INFO: Number of nodes with available pods: 0
Feb  7 11:11:29.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:30.815: INFO: Number of nodes with available pods: 0
Feb  7 11:11:30.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:31.829: INFO: Number of nodes with available pods: 0
Feb  7 11:11:31.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:32.818: INFO: Number of nodes with available pods: 0
Feb  7 11:11:32.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:33.831: INFO: Number of nodes with available pods: 0
Feb  7 11:11:33.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:34.815: INFO: Number of nodes with available pods: 0
Feb  7 11:11:34.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:35.848: INFO: Number of nodes with available pods: 0
Feb  7 11:11:35.848: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:36.819: INFO: Number of nodes with available pods: 0
Feb  7 11:11:36.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:38.033: INFO: Number of nodes with available pods: 0
Feb  7 11:11:38.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:38.814: INFO: Number of nodes with available pods: 0
Feb  7 11:11:38.814: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:39.831: INFO: Number of nodes with available pods: 0
Feb  7 11:11:39.831: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:40.812: INFO: Number of nodes with available pods: 0
Feb  7 11:11:40.812: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 11:11:41.809: INFO: Number of nodes with available pods: 1
Feb  7 11:11:41.809: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lm8p7, will wait for the garbage collector to delete the pods
Feb  7 11:11:41.943: INFO: Deleting DaemonSet.extensions daemon-set took: 69.148351ms
Feb  7 11:11:42.043: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.310364ms
Feb  7 11:11:49.386: INFO: Number of nodes with available pods: 0
Feb  7 11:11:49.386: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 11:11:49.423: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lm8p7/daemonsets","resourceVersion":"20854032"},"items":null}

Feb  7 11:11:49.445: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lm8p7/pods","resourceVersion":"20854032"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:11:49.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lm8p7" for this suite.
Feb  7 11:11:55.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:11:55.741: INFO: namespace: e2e-tests-daemonsets-lm8p7, resource: bindings, ignored listing per whitelist
Feb  7 11:11:55.834: INFO: namespace e2e-tests-daemonsets-lm8p7 deletion completed in 6.317793639s

• [SLOW TEST:48.687 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:11:55.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rh4bd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rh4bd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 11:12:12.127: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.150: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.202: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.218: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.229: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.237: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.246: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.256: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.263: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.269: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-a6cacfac-499a-11ea-abae-0242ac110005)
Feb  7 11:12:12.355: INFO: Lookups using e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rh4bd.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Feb  7 11:12:17.494: INFO: DNS probes using e2e-tests-dns-rh4bd/dns-test-a6cacfac-499a-11ea-abae-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:12:17.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-rh4bd" for this suite.
Feb  7 11:12:25.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:12:25.774: INFO: namespace: e2e-tests-dns-rh4bd, resource: bindings, ignored listing per whitelist
Feb  7 11:12:25.924: INFO: namespace e2e-tests-dns-rh4bd deletion completed in 8.280289716s

• [SLOW TEST:30.090 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:12:25.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb  7 11:12:26.638: INFO: Waiting up to 5m0s for pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv" in namespace "e2e-tests-svcaccounts-m8ljc" to be "success or failure"
Feb  7 11:12:26.738: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 99.394873ms
Feb  7 11:12:28.751: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113012592s
Feb  7 11:12:30.770: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13149358s
Feb  7 11:12:33.045: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406846376s
Feb  7 11:12:35.532: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.893752357s
Feb  7 11:12:37.553: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.915076591s
Feb  7 11:12:39.576: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.937235611s
Feb  7 11:12:41.650: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.012034965s
Feb  7 11:12:43.694: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.055938093s
STEP: Saw pod success
Feb  7 11:12:43.694: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv" satisfied condition "success or failure"
Feb  7 11:12:43.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv container token-test: 
STEP: delete the pod
Feb  7 11:12:44.104: INFO: Waiting for pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv to disappear
Feb  7 11:12:44.116: INFO: Pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-bdrsv no longer exists
STEP: Creating a pod to test consume service account root CA
Feb  7 11:12:44.140: INFO: Waiting up to 5m0s for pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2" in namespace "e2e-tests-svcaccounts-m8ljc" to be "success or failure"
Feb  7 11:12:44.179: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.064136ms
Feb  7 11:12:46.192: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052128279s
Feb  7 11:12:48.208: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067990476s
Feb  7 11:12:50.224: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083545867s
Feb  7 11:12:52.444: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303589512s
Feb  7 11:12:54.467: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326824557s
Feb  7 11:12:56.488: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.347452701s
Feb  7 11:12:58.533: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.392321863s
Feb  7 11:13:01.937: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.79702079s
STEP: Saw pod success
Feb  7 11:13:01.938: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2" satisfied condition "success or failure"
Feb  7 11:13:01.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2 container root-ca-test: 
STEP: delete the pod
Feb  7 11:13:02.523: INFO: Waiting for pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2 to disappear
Feb  7 11:13:02.529: INFO: Pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-nfcm2 no longer exists
STEP: Creating a pod to test consume service account namespace
Feb  7 11:13:02.758: INFO: Waiting up to 5m0s for pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz" in namespace "e2e-tests-svcaccounts-m8ljc" to be "success or failure"
Feb  7 11:13:02.776: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.286263ms
Feb  7 11:13:04.811: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052833911s
Feb  7 11:13:06.832: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073959018s
Feb  7 11:13:09.456: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697701683s
Feb  7 11:13:11.481: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722949167s
Feb  7 11:13:13.546: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.787653059s
Feb  7 11:13:15.826: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.067872091s
Feb  7 11:13:17.880: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Pending", Reason="", readiness=false. Elapsed: 15.121341891s
Feb  7 11:13:19.899: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.140572673s
STEP: Saw pod success
Feb  7 11:13:19.899: INFO: Pod "pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz" satisfied condition "success or failure"
Feb  7 11:13:19.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz container namespace-test: 
STEP: delete the pod
Feb  7 11:13:19.962: INFO: Waiting for pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz to disappear
Feb  7 11:13:20.040: INFO: Pod pod-service-account-b900ea72-499a-11ea-abae-0242ac110005-fj2bz no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:13:20.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m8ljc" for this suite.
Feb  7 11:13:28.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:13:28.374: INFO: namespace: e2e-tests-svcaccounts-m8ljc, resource: bindings, ignored listing per whitelist
Feb  7 11:13:28.426: INFO: namespace e2e-tests-svcaccounts-m8ljc deletion completed in 8.376078118s

• [SLOW TEST:62.501 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:13:28.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7wwh
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 11:13:28.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7wwh" in namespace "e2e-tests-subpath-mm7jn" to be "success or failure"
Feb  7 11:13:28.931: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 22.390602ms
Feb  7 11:13:30.949: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040495645s
Feb  7 11:13:33.003: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094706125s
Feb  7 11:13:35.389: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480868159s
Feb  7 11:13:37.399: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49086993s
Feb  7 11:13:39.416: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.507924594s
Feb  7 11:13:42.095: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.186270012s
Feb  7 11:13:44.103: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.194512167s
Feb  7 11:13:46.128: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 17.219210323s
Feb  7 11:13:48.147: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 19.238537453s
Feb  7 11:13:50.177: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 21.268141735s
Feb  7 11:13:52.200: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 23.291560886s
Feb  7 11:13:54.214: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 25.305310652s
Feb  7 11:13:56.232: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 27.323034232s
Feb  7 11:13:58.245: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 29.336575689s
Feb  7 11:14:00.265: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 31.356391512s
Feb  7 11:14:02.288: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Running", Reason="", readiness=false. Elapsed: 33.379136474s
Feb  7 11:14:04.317: INFO: Pod "pod-subpath-test-configmap-7wwh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.408615986s
STEP: Saw pod success
Feb  7 11:14:04.317: INFO: Pod "pod-subpath-test-configmap-7wwh" satisfied condition "success or failure"
Feb  7 11:14:04.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7wwh container test-container-subpath-configmap-7wwh: 
STEP: delete the pod
Feb  7 11:14:05.577: INFO: Waiting for pod pod-subpath-test-configmap-7wwh to disappear
Feb  7 11:14:05.899: INFO: Pod pod-subpath-test-configmap-7wwh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7wwh
Feb  7 11:14:05.899: INFO: Deleting pod "pod-subpath-test-configmap-7wwh" in namespace "e2e-tests-subpath-mm7jn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:14:05.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mm7jn" for this suite.
Feb  7 11:14:14.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:14:14.112: INFO: namespace: e2e-tests-subpath-mm7jn, resource: bindings, ignored listing per whitelist
Feb  7 11:14:14.214: INFO: namespace e2e-tests-subpath-mm7jn deletion completed in 8.256511501s

• [SLOW TEST:45.787 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:14:14.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 11:14:14.597: INFO: Waiting up to 5m0s for pod "pod-f94d4963-499a-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-6c2x5" to be "success or failure"
Feb  7 11:14:14.630: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.301718ms
Feb  7 11:14:16.689: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09156358s
Feb  7 11:14:18.733: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13588363s
Feb  7 11:14:21.742: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.144575215s
Feb  7 11:14:23.798: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.200674221s
Feb  7 11:14:25.832: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.234987151s
Feb  7 11:14:27.853: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.255416408s
STEP: Saw pod success
Feb  7 11:14:27.853: INFO: Pod "pod-f94d4963-499a-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:14:27.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f94d4963-499a-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:14:28.215: INFO: Waiting for pod pod-f94d4963-499a-11ea-abae-0242ac110005 to disappear
Feb  7 11:14:28.258: INFO: Pod pod-f94d4963-499a-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:14:28.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6c2x5" for this suite.
Feb  7 11:14:36.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:14:36.527: INFO: namespace: e2e-tests-emptydir-6c2x5, resource: bindings, ignored listing per whitelist
Feb  7 11:14:36.637: INFO: namespace e2e-tests-emptydir-6c2x5 deletion completed in 8.373223393s

• [SLOW TEST:22.423 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:14:36.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rwlhx
Feb  7 11:14:46.847: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rwlhx
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 11:14:46.854: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:18:47.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rwlhx" for this suite.
Feb  7 11:18:53.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:18:53.951: INFO: namespace: e2e-tests-container-probe-rwlhx, resource: bindings, ignored listing per whitelist
Feb  7 11:18:54.087: INFO: namespace e2e-tests-container-probe-rwlhx deletion completed in 6.507078857s

• [SLOW TEST:257.450 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:18:54.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:18:54.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:19:02.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zvsvf" for this suite.
Feb  7 11:19:56.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:19:56.814: INFO: namespace: e2e-tests-pods-zvsvf, resource: bindings, ignored listing per whitelist
Feb  7 11:19:56.826: INFO: namespace e2e-tests-pods-zvsvf deletion completed in 54.216805591s

• [SLOW TEST:62.738 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:19:56.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c59530b2-499b-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 11:19:57.318: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-5r96g" to be "success or failure"
Feb  7 11:19:57.324: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029843ms
Feb  7 11:19:59.345: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026404712s
Feb  7 11:20:01.355: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036202817s
Feb  7 11:20:03.489: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170616413s
Feb  7 11:20:05.954: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635932303s
Feb  7 11:20:08.189: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.871110816s
STEP: Saw pod success
Feb  7 11:20:08.190: INFO: Pod "pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:20:08.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 11:20:08.373: INFO: Waiting for pod pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005 to disappear
Feb  7 11:20:08.378: INFO: Pod pod-projected-secrets-c5a46ac1-499b-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:20:08.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5r96g" for this suite.
Feb  7 11:20:14.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:20:14.567: INFO: namespace: e2e-tests-projected-5r96g, resource: bindings, ignored listing per whitelist
Feb  7 11:20:14.629: INFO: namespace e2e-tests-projected-5r96g deletion completed in 6.244329447s

• [SLOW TEST:17.803 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:20:14.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-ds4x
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 11:20:14.877: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ds4x" in namespace "e2e-tests-subpath-dztvh" to be "success or failure"
Feb  7 11:20:15.069: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 191.044758ms
Feb  7 11:20:17.742: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.864732162s
Feb  7 11:20:19.783: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.905012576s
Feb  7 11:20:21.826: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.948231866s
Feb  7 11:20:23.840: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962259221s
Feb  7 11:20:25.863: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.984941363s
Feb  7 11:20:28.208: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 13.33034688s
Feb  7 11:20:30.222: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.344587325s
Feb  7 11:20:32.240: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 17.362280025s
Feb  7 11:20:34.258: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 19.380256746s
Feb  7 11:20:36.309: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 21.43118418s
Feb  7 11:20:38.323: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 23.445480248s
Feb  7 11:20:40.359: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 25.481474097s
Feb  7 11:20:42.389: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 27.510984615s
Feb  7 11:20:44.402: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 29.524209216s
Feb  7 11:20:46.416: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 31.538380489s
Feb  7 11:20:48.782: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Running", Reason="", readiness=false. Elapsed: 33.904400364s
Feb  7 11:20:50.817: INFO: Pod "pod-subpath-test-downwardapi-ds4x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.939499707s
STEP: Saw pod success
Feb  7 11:20:50.817: INFO: Pod "pod-subpath-test-downwardapi-ds4x" satisfied condition "success or failure"
Feb  7 11:20:50.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-ds4x container test-container-subpath-downwardapi-ds4x: 
STEP: delete the pod
Feb  7 11:20:51.645: INFO: Waiting for pod pod-subpath-test-downwardapi-ds4x to disappear
Feb  7 11:20:51.680: INFO: Pod pod-subpath-test-downwardapi-ds4x no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ds4x
Feb  7 11:20:51.680: INFO: Deleting pod "pod-subpath-test-downwardapi-ds4x" in namespace "e2e-tests-subpath-dztvh"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:20:51.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dztvh" for this suite.
Feb  7 11:20:57.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:20:57.886: INFO: namespace: e2e-tests-subpath-dztvh, resource: bindings, ignored listing per whitelist
Feb  7 11:20:57.899: INFO: namespace e2e-tests-subpath-dztvh deletion completed in 6.203833041s

• [SLOW TEST:43.270 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:20:57.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0207 11:21:03.456415       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 11:21:03.456: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:21:03.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zzpk7" for this suite.
Feb  7 11:21:12.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:21:12.245: INFO: namespace: e2e-tests-gc-zzpk7, resource: bindings, ignored listing per whitelist
Feb  7 11:21:12.394: INFO: namespace e2e-tests-gc-zzpk7 deletion completed in 8.927156882s

• [SLOW TEST:14.494 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:21:12.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f28bf64b-499b-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:21:12.648: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-cwt47" to be "success or failure"
Feb  7 11:21:12.653: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985665ms
Feb  7 11:21:14.777: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128949215s
Feb  7 11:21:16.799: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151468673s
Feb  7 11:21:19.216: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.568335837s
Feb  7 11:21:21.234: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586550943s
Feb  7 11:21:23.253: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.605102815s
STEP: Saw pod success
Feb  7 11:21:23.253: INFO: Pod "pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:21:23.267: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 11:21:23.422: INFO: Waiting for pod pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005 to disappear
Feb  7 11:21:23.445: INFO: Pod pod-projected-configmaps-f28e1c44-499b-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:21:23.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cwt47" for this suite.
Feb  7 11:21:29.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:21:29.689: INFO: namespace: e2e-tests-projected-cwt47, resource: bindings, ignored listing per whitelist
Feb  7 11:21:29.709: INFO: namespace e2e-tests-projected-cwt47 deletion completed in 6.248744112s

• [SLOW TEST:17.315 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:21:29.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 11:21:29.963: INFO: Waiting up to 5m0s for pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-rpshx" to be "success or failure"
Feb  7 11:21:30.028: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.27366ms
Feb  7 11:21:32.256: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293701964s
Feb  7 11:21:34.295: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332127454s
Feb  7 11:21:36.441: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478133048s
Feb  7 11:21:38.472: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508822921s
Feb  7 11:21:40.615: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.652134091s
STEP: Saw pod success
Feb  7 11:21:40.615: INFO: Pod "pod-fcdf9ffe-499b-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:21:41.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fcdf9ffe-499b-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:21:41.120: INFO: Waiting for pod pod-fcdf9ffe-499b-11ea-abae-0242ac110005 to disappear
Feb  7 11:21:41.193: INFO: Pod pod-fcdf9ffe-499b-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:21:41.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rpshx" for this suite.
Feb  7 11:21:47.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:21:47.427: INFO: namespace: e2e-tests-emptydir-rpshx, resource: bindings, ignored listing per whitelist
Feb  7 11:21:47.438: INFO: namespace e2e-tests-emptydir-rpshx deletion completed in 6.238991743s

• [SLOW TEST:17.729 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:21:47.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  7 11:21:47.721: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 11:21:47.849: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 11:21:47.856: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  7 11:21:47.876: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  7 11:21:47.876: INFO: 	Container coredns ready: true, restart count 0
Feb  7 11:21:47.876: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  7 11:21:47.876: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 11:21:47.876: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 11:21:47.876: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  7 11:21:47.876: INFO: 	Container weave ready: true, restart count 0
Feb  7 11:21:47.876: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 11:21:47.876: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  7 11:21:47.876: INFO: 	Container coredns ready: true, restart count 0
Feb  7 11:21:47.876: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 11:21:47.876: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 11:21:47.876: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f11ae358bda354], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:21:48.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-sgtws" for this suite.
Feb  7 11:21:54.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:21:55.089: INFO: namespace: e2e-tests-sched-pred-sgtws, resource: bindings, ignored listing per whitelist
Feb  7 11:21:55.135: INFO: namespace e2e-tests-sched-pred-sgtws deletion completed in 6.191131909s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.698 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:21:55.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:21:55.388: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:21:56.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-6rz4q" for this suite.
Feb  7 11:22:02.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:22:02.986: INFO: namespace: e2e-tests-custom-resource-definition-6rz4q, resource: bindings, ignored listing per whitelist
Feb  7 11:22:03.017: INFO: namespace e2e-tests-custom-resource-definition-6rz4q deletion completed in 6.409156415s

• [SLOW TEST:7.881 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:22:03.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-10c13d50-499c-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:22:03.417: INFO: Waiting up to 5m0s for pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-t4zt9" to be "success or failure"
Feb  7 11:22:03.443: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.133447ms
Feb  7 11:22:05.456: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039392945s
Feb  7 11:22:07.480: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063239804s
Feb  7 11:22:09.500: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08341941s
Feb  7 11:22:11.555: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138719503s
Feb  7 11:22:13.566: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148986059s
STEP: Saw pod success
Feb  7 11:22:13.566: INFO: Pod "pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:22:13.571: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 11:22:14.693: INFO: Waiting for pod pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005 to disappear
Feb  7 11:22:14.708: INFO: Pod pod-configmaps-10c304f1-499c-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:22:14.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-t4zt9" for this suite.
Feb  7 11:22:20.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:22:21.124: INFO: namespace: e2e-tests-configmap-t4zt9, resource: bindings, ignored listing per whitelist
Feb  7 11:22:21.185: INFO: namespace e2e-tests-configmap-t4zt9 deletion completed in 6.463993478s

• [SLOW TEST:18.168 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:22:21.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  7 11:22:31.447: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1b83c05c-499c-11ea-abae-0242ac110005,GenerateName:,Namespace:e2e-tests-events-k2jvh,SelfLink:/api/v1/namespaces/e2e-tests-events-k2jvh/pods/send-events-1b83c05c-499c-11ea-abae-0242ac110005,UID:1b89c2a9-499c-11ea-a994-fa163e34d433,ResourceVersion:20855260,Generation:0,CreationTimestamp:2020-02-07 11:22:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 356696745,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-b7wmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b7wmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-b7wmz true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00250cfe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00250d000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:22:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:22:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:22:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:22:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-07 11:22:21 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-07 11:22:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://002d64ef170c13555c60392e9aea0781befd24917e624f37a05e814afee9343d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  7 11:22:33.486: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  7 11:22:35.501: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:22:35.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-k2jvh" for this suite.
Feb  7 11:23:15.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:23:15.858: INFO: namespace: e2e-tests-events-k2jvh, resource: bindings, ignored listing per whitelist
Feb  7 11:23:15.928: INFO: namespace e2e-tests-events-k2jvh deletion completed in 40.378980291s

• [SLOW TEST:54.743 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:23:15.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-3c3676ac-499c-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 11:23:16.252: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-ml7cj" to be "success or failure"
Feb  7 11:23:16.327: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.570131ms
Feb  7 11:23:18.751: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498615995s
Feb  7 11:23:20.763: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.511385137s
Feb  7 11:23:22.780: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528236779s
Feb  7 11:23:24.807: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555093711s
Feb  7 11:23:26.979: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.727006683s
Feb  7 11:23:28.997: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.744662922s
STEP: Saw pod success
Feb  7 11:23:28.997: INFO: Pod "pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:23:29.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 11:23:29.104: INFO: Waiting for pod pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005 to disappear
Feb  7 11:23:29.134: INFO: Pod pod-projected-secrets-3c37802a-499c-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:23:29.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ml7cj" for this suite.
Feb  7 11:23:35.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:23:35.433: INFO: namespace: e2e-tests-projected-ml7cj, resource: bindings, ignored listing per whitelist
Feb  7 11:23:35.488: INFO: namespace e2e-tests-projected-ml7cj deletion completed in 6.341184983s

• [SLOW TEST:19.558 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:23:35.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0207 11:23:52.063522       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 11:23:52.063: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:23:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zp8fd" for this suite.
Feb  7 11:24:18.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:24:18.316: INFO: namespace: e2e-tests-gc-zp8fd, resource: bindings, ignored listing per whitelist
Feb  7 11:24:18.531: INFO: namespace e2e-tests-gc-zp8fd deletion completed in 26.455396166s

• [SLOW TEST:43.043 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
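Editor's note: the garbage-collector test above hinges on one rule — a dependent is only collected once *all* of its owners are gone, which is why the pods given both `simpletest-rc-to-be-deleted` and `simpletest-rc-to-stay` as owners survive. A toy sketch of that ownership graph (illustrative names only, not the controller's implementation):

```go
package main

import "fmt"

// collect returns every object whose owner UIDs are all absent from
// live. An object with at least one surviving owner is kept, which is
// exactly why half the pods above (owned by both RCs) outlive the
// deletion of simpletest-rc-to-be-deleted.
func collect(live map[string]bool, owners map[string][]string) []string {
	var deleted []string
	for obj, refs := range owners {
		anyAlive := false
		for _, o := range refs {
			if live[o] {
				anyAlive = true
				break
			}
		}
		if !anyAlive {
			deleted = append(deleted, obj)
		}
	}
	return deleted
}

func main() {
	live := map[string]bool{"rc-to-stay": true} // rc-to-be-deleted is gone
	owners := map[string][]string{
		"pod-a": {"rc-to-be-deleted"},               // orphaned: collected
		"pod-b": {"rc-to-be-deleted", "rc-to-stay"}, // still owned: kept
	}
	fmt.Println(collect(live, owners)) // only pod-a is collected
}
```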
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:24:18.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  7 11:24:18.961: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:24:42.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cdmxh" for this suite.
Feb  7 11:25:06.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:25:06.395: INFO: namespace: e2e-tests-init-container-cdmxh, resource: bindings, ignored listing per whitelist
Feb  7 11:25:06.609: INFO: namespace e2e-tests-init-container-cdmxh deletion completed in 24.290063866s

• [SLOW TEST:48.078 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:25:06.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  7 11:25:08.312: INFO: Pod name wrapped-volume-race-7ee2006a-499c-11ea-abae-0242ac110005: Found 0 pods out of 5
Feb  7 11:25:14.300: INFO: Pod name wrapped-volume-race-7ee2006a-499c-11ea-abae-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7ee2006a-499c-11ea-abae-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtr55, will wait for the garbage collector to delete the pods
Feb  7 11:27:04.517: INFO: Deleting ReplicationController wrapped-volume-race-7ee2006a-499c-11ea-abae-0242ac110005 took: 30.351864ms
Feb  7 11:27:04.818: INFO: Terminating ReplicationController wrapped-volume-race-7ee2006a-499c-11ea-abae-0242ac110005 pods took: 300.767097ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 11:27:52.824: INFO: Pod name wrapped-volume-race-e105bdfa-499c-11ea-abae-0242ac110005: Found 0 pods out of 5
Feb  7 11:27:57.905: INFO: Pod name wrapped-volume-race-e105bdfa-499c-11ea-abae-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e105bdfa-499c-11ea-abae-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtr55, will wait for the garbage collector to delete the pods
Feb  7 11:30:22.219: INFO: Deleting ReplicationController wrapped-volume-race-e105bdfa-499c-11ea-abae-0242ac110005 took: 54.542417ms
Feb  7 11:30:22.520: INFO: Terminating ReplicationController wrapped-volume-race-e105bdfa-499c-11ea-abae-0242ac110005 pods took: 300.936034ms
STEP: Creating RC which spawns configmap-volume pods
Feb  7 11:31:13.070: INFO: Pod name wrapped-volume-race-58693f3d-499d-11ea-abae-0242ac110005: Found 0 pods out of 5
Feb  7 11:31:18.092: INFO: Pod name wrapped-volume-race-58693f3d-499d-11ea-abae-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-58693f3d-499d-11ea-abae-0242ac110005 in namespace e2e-tests-emptydir-wrapper-vtr55, will wait for the garbage collector to delete the pods
Feb  7 11:33:02.257: INFO: Deleting ReplicationController wrapped-volume-race-58693f3d-499d-11ea-abae-0242ac110005 took: 25.45002ms
Feb  7 11:33:03.358: INFO: Terminating ReplicationController wrapped-volume-race-58693f3d-499d-11ea-abae-0242ac110005 pods took: 1.100594158s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:33:54.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vtr55" for this suite.
Feb  7 11:34:04.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:34:05.016: INFO: namespace: e2e-tests-emptydir-wrapper-vtr55, resource: bindings, ignored listing per whitelist
Feb  7 11:34:05.016: INFO: namespace e2e-tests-emptydir-wrapper-vtr55 deletion completed in 10.317854239s

• [SLOW TEST:538.407 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:34:05.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:34:05.388: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  7 11:34:05.625: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  7 11:34:11.031: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 11:34:21.091: INFO: Creating deployment "test-rolling-update-deployment"
Feb  7 11:34:21.101: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  7 11:34:21.117: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  7 11:34:23.265: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  7 11:34:23.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:34:25.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:34:27.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:34:29.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716672061, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 11:34:31.338: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  7 11:34:31.360: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-rb4gp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rb4gp/deployments/test-rolling-update-deployment,UID:c8833d34-499d-11ea-a994-fa163e34d433,ResourceVersion:20856764,Generation:1,CreationTimestamp:2020-02-07 11:34:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-07 11:34:21 +0000 UTC 2020-02-07 11:34:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-07 11:34:31 +0000 UTC 2020-02-07 11:34:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 11:34:31.365: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-rb4gp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rb4gp/replicasets/test-rolling-update-deployment-75db98fb4c,UID:c88abf3f-499d-11ea-a994-fa163e34d433,ResourceVersion:20856755,Generation:1,CreationTimestamp:2020-02-07 11:34:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c8833d34-499d-11ea-a994-fa163e34d433 0xc0017f6a67 0xc0017f6a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  7 11:34:31.365: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  7 11:34:31.366: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-rb4gp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rb4gp/replicasets/test-rolling-update-controller,UID:bf270b02-499d-11ea-a994-fa163e34d433,ResourceVersion:20856763,Generation:2,CreationTimestamp:2020-02-07 11:34:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c8833d34-499d-11ea-a994-fa163e34d433 0xc0017f698f 0xc0017f69a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 11:34:31.372: INFO: Pod "test-rolling-update-deployment-75db98fb4c-k9cm6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-k9cm6,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-rb4gp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rb4gp/pods/test-rolling-update-deployment-75db98fb4c-k9cm6,UID:c88bc17f-499d-11ea-a994-fa163e34d433,ResourceVersion:20856754,Generation:0,CreationTimestamp:2020-02-07 11:34:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c c88abf3f-499d-11ea-a994-fa163e34d433 0xc001d6dad7 0xc001d6dad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5tw6m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tw6m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-5tw6m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6de00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6de20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:34:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:34:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:34:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 11:34:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-07 11:34:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-07 11:34:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://68032cbf3ae79c6986ef9eb794c729878b2e8abd70b1e56aa76ae29d66f5b000}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:34:31.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rb4gp" for this suite.
Feb  7 11:34:40.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:34:40.115: INFO: namespace: e2e-tests-deployment-rb4gp, resource: bindings, ignored listing per whitelist
Feb  7 11:34:40.257: INFO: namespace e2e-tests-deployment-rb4gp deletion completed in 8.874708138s

• [SLOW TEST:35.240 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
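Editor's note: in the Deployment dump above, `MaxUnavailable:25%!,(MISSING)` is a Go `fmt` artifact for `25%`. Those percentages are resolved against the replica count with asymmetric rounding (maxSurge rounds up, maxUnavailable rounds down), which is what lets this 1-replica rollout briefly run 2 pods while never dropping to 0 ready. A sketch of that arithmetic under those assumptions:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// resolvePercent converts a percentage string like "25%" into an
// absolute pod count for a given replica total. roundUp selects the
// maxSurge behavior (ceil); roundUp=false selects maxUnavailable (floor).
func resolvePercent(val string, replicas int, roundUp bool) int {
	p, _ := strconv.Atoi(strings.TrimSuffix(val, "%"))
	f := float64(replicas) * float64(p) / 100.0
	if roundUp {
		return int(math.Ceil(f))
	}
	return int(math.Floor(f))
}

func main() {
	replicas := 1
	maxSurge := resolvePercent("25%", replicas, true)        // ceil(0.25)  -> 1
	maxUnavailable := resolvePercent("25%", replicas, false) // floor(0.25) -> 0
	fmt.Println(maxSurge, maxUnavailable)
}
```

With 1 replica this yields surge 1 and unavailable 0: the new pod is created first, and the old one is deleted only after the new one is ready, matching the `Replicas:2, UpdatedReplicas:1` status seen while the rollout progressed.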
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:34:40.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0207 11:34:51.834020       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 11:34:51.834: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:34:51.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-c9zz7" for this suite.
Feb  7 11:34:58.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:34:58.906: INFO: namespace: e2e-tests-gc-c9zz7, resource: bindings, ignored listing per whitelist
Feb  7 11:34:58.925: INFO: namespace e2e-tests-gc-c9zz7 deletion completed in 6.781521389s

• [SLOW TEST:18.668 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:34:58.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:34:59.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-rzfqk" to be "success or failure"
Feb  7 11:34:59.241: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 143.612176ms
Feb  7 11:35:01.397: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299433638s
Feb  7 11:35:03.434: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336088466s
Feb  7 11:35:05.620: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522420466s
Feb  7 11:35:08.737: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.638982497s
Feb  7 11:35:10.779: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.680920406s
STEP: Saw pod success
Feb  7 11:35:10.779: INFO: Pod "downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:35:10.792: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:35:10.956: INFO: Waiting for pod downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005 to disappear
Feb  7 11:35:10.970: INFO: Pod downwardapi-volume-df23f836-499d-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:35:10.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rzfqk" for this suite.
Feb  7 11:35:18.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:35:18.277: INFO: namespace: e2e-tests-downward-api-rzfqk, resource: bindings, ignored listing per whitelist
Feb  7 11:35:18.371: INFO: namespace e2e-tests-downward-api-rzfqk deletion completed in 7.281237098s

• [SLOW TEST:19.446 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
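The "set mode on item file" test above creates a pod whose downward API volume item carries an explicit file mode, then asserts on it from inside the container. A minimal sketch of that kind of manifest follows; all names, the image, and the `0400` mode are illustrative, not the generated values from this run:

```shell
# Write a pod spec similar to what the test generates: a downward API
# volume exposing the pod's labels as a file with a per-item mode.
cat <<'EOF' > downward-mode-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the octal permission bits of the projected file.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        mode: 0400        # the per-item mode the test asserts on
        fieldRef:
          fieldPath: metadata.labels
EOF
```

Applying this with `kubectl apply -f downward-mode-pod.yaml` and reading the container log would show the mode the kubelet applied, which is the "success or failure" condition the log is polling for.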
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:35:18.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ead26f03-499d-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 11:35:18.819: INFO: Waiting up to 5m0s for pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-bphcl" to be "success or failure"
Feb  7 11:35:18.835: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.399073ms
Feb  7 11:35:20.866: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047217443s
Feb  7 11:35:22.945: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126339994s
Feb  7 11:35:24.964: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14437725s
Feb  7 11:35:26.979: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159890909s
Feb  7 11:35:28.998: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178438682s
STEP: Saw pod success
Feb  7 11:35:28.998: INFO: Pod "pod-secrets-eae960c7-499d-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:35:29.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-eae960c7-499d-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 11:35:29.244: INFO: Waiting for pod pod-secrets-eae960c7-499d-11ea-abae-0242ac110005 to disappear
Feb  7 11:35:30.006: INFO: Pod pod-secrets-eae960c7-499d-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:35:30.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bphcl" for this suite.
Feb  7 11:35:36.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:35:36.450: INFO: namespace: e2e-tests-secrets-bphcl, resource: bindings, ignored listing per whitelist
Feb  7 11:35:36.588: INFO: namespace e2e-tests-secrets-bphcl deletion completed in 6.417388751s
STEP: Destroying namespace "e2e-tests-secret-namespace-zvsxt" for this suite.
Feb  7 11:35:42.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:35:42.679: INFO: namespace: e2e-tests-secret-namespace-zvsxt, resource: bindings, ignored listing per whitelist
Feb  7 11:35:42.836: INFO: namespace e2e-tests-secret-namespace-zvsxt deletion completed in 6.248374964s

• [SLOW TEST:24.465 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
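The secrets test above destroys two namespaces because it creates a same-named Secret in a second namespace to prove the pod's mount resolves per-namespace. A rough, hand-written equivalent of the setup (names and the secret payload are illustrative):

```shell
# Secret plus a pod that mounts it; a duplicate-named Secret in another
# namespace would not be visible to this pod.
cat <<'EOF' > secret-volume-example.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: ns-a
data:
  data-1: dmFsdWUtMQ==   # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: ns-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF
```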
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:35:42.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:35:43.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-4rr77" to be "success or failure"
Feb  7 11:35:43.115: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.330999ms
Feb  7 11:35:45.201: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09725267s
Feb  7 11:35:47.216: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112390891s
Feb  7 11:35:49.299: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194889481s
Feb  7 11:35:51.313: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208986008s
Feb  7 11:35:53.322: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218653369s
STEP: Saw pod success
Feb  7 11:35:53.322: INFO: Pod "downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:35:53.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:35:53.409: INFO: Waiting for pod downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005 to disappear
Feb  7 11:35:53.538: INFO: Pod downwardapi-volume-f962f161-499d-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:35:53.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4rr77" for this suite.
Feb  7 11:36:00.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:36:00.552: INFO: namespace: e2e-tests-downward-api-4rr77, resource: bindings, ignored listing per whitelist
Feb  7 11:36:00.636: INFO: namespace e2e-tests-downward-api-4rr77 deletion completed in 7.079740666s

• [SLOW TEST:17.799 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:36:00.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  7 11:36:11.661: INFO: Successfully updated pod "labelsupdate0408d370-499e-11ea-abae-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:36:13.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d5xbw" for this suite.
Feb  7 11:36:37.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:36:37.890: INFO: namespace: e2e-tests-downward-api-d5xbw, resource: bindings, ignored listing per whitelist
Feb  7 11:36:37.997: INFO: namespace e2e-tests-downward-api-d5xbw deletion completed in 24.255600893s

• [SLOW TEST:37.360 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:36:37.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:36:48.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-wd545" for this suite.
Feb  7 11:37:30.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:37:31.598: INFO: namespace: e2e-tests-kubelet-test-wd545, resource: bindings, ignored listing per whitelist
Feb  7 11:37:31.767: INFO: namespace e2e-tests-kubelet-test-wd545 deletion completed in 43.467576251s

• [SLOW TEST:53.770 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
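The hostAliases test above verifies that entries declared in the pod spec show up in the container's /etc/hosts, written there by the kubelet. A sketch of such a spec, with hypothetical IPs and hostnames:

```shell
# Pod with hostAliases; the kubelet appends these to /etc/hosts
# before the container starts.
cat <<'EOF' > hostaliases-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
```

The container log would then contain a line mapping `123.45.67.89` to `foo.local bar.local`, which is what the test greps for.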
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:37:31.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 11:37:32.001: INFO: Waiting up to 5m0s for pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-99kcz" to be "success or failure"
Feb  7 11:37:32.019: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.601753ms
Feb  7 11:37:34.239: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23796786s
Feb  7 11:37:36.264: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263438191s
Feb  7 11:37:38.534: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532923542s
Feb  7 11:37:40.638: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636791838s
Feb  7 11:37:42.674: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672773397s
STEP: Saw pod success
Feb  7 11:37:42.674: INFO: Pod "pod-3a4ac8fa-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:37:42.686: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3a4ac8fa-499e-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:37:43.716: INFO: Waiting for pod pod-3a4ac8fa-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:37:43.931: INFO: Pod pod-3a4ac8fa-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:37:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-99kcz" for this suite.
Feb  7 11:37:50.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:37:50.326: INFO: namespace: e2e-tests-emptydir-99kcz, resource: bindings, ignored listing per whitelist
Feb  7 11:37:50.424: INFO: namespace e2e-tests-emptydir-99kcz deletion completed in 6.468580609s

• [SLOW TEST:18.657 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:37:50.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  7 11:37:50.807: INFO: Waiting up to 5m0s for pod "pod-457a97ae-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-d54j8" to be "success or failure"
Feb  7 11:37:50.827: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.113855ms
Feb  7 11:37:52.842: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034529829s
Feb  7 11:37:54.858: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050638001s
Feb  7 11:37:56.950: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143261555s
Feb  7 11:37:58.986: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178686926s
Feb  7 11:38:01.044: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.237075095s
STEP: Saw pod success
Feb  7 11:38:01.044: INFO: Pod "pod-457a97ae-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:38:01.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-457a97ae-499e-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:38:01.924: INFO: Waiting for pod pod-457a97ae-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:38:02.526: INFO: Pod pod-457a97ae-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:38:02.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d54j8" for this suite.
Feb  7 11:38:08.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:38:08.954: INFO: namespace: e2e-tests-emptydir-d54j8, resource: bindings, ignored listing per whitelist
Feb  7 11:38:09.170: INFO: namespace e2e-tests-emptydir-d54j8 deletion completed in 6.506160498s

• [SLOW TEST:18.745 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:38:09.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  7 11:38:09.431: INFO: Waiting up to 5m0s for pod "pod-509a201c-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-sdsn8" to be "success or failure"
Feb  7 11:38:09.444: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.201998ms
Feb  7 11:38:11.465: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034405537s
Feb  7 11:38:13.481: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050526632s
Feb  7 11:38:15.747: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31621895s
Feb  7 11:38:18.010: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.57882304s
Feb  7 11:38:20.097: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.666421249s
STEP: Saw pod success
Feb  7 11:38:20.097: INFO: Pod "pod-509a201c-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:38:20.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-509a201c-499e-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:38:20.561: INFO: Waiting for pod pod-509a201c-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:38:20.589: INFO: Pod pod-509a201c-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:38:20.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sdsn8" for this suite.
Feb  7 11:38:28.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:38:28.754: INFO: namespace: e2e-tests-emptydir-sdsn8, resource: bindings, ignored listing per whitelist
Feb  7 11:38:28.828: INFO: namespace e2e-tests-emptydir-sdsn8 deletion completed in 8.226790158s

• [SLOW TEST:19.658 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
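The three emptydir runs above walk a (user, mode, medium) matrix: the test container creates a file on the emptydir mount, applies the mode, and reads the permission bits back. The same check can be sketched locally without a cluster (GNU coreutils assumed for `stat -c`):

```shell
# Reproduce the permission check the test container performs, for the
# three modes exercised above, on a throwaway local directory.
dir=$(mktemp -d)
for mode in 0777 0644 0666; do
  f="$dir/mount_test"
  touch "$f"
  chmod "$mode" "$f"
  # stat -c '%a' prints the octal permission bits, e.g. "0777 -> 777"
  echo "$mode -> $(stat -c '%a' "$f")"
done
rm -rf "$dir"
```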
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:38:28.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb  7 11:38:29.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  7 11:38:29.231: INFO: stderr: ""
Feb  7 11:38:29.231: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:38:29.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bxprz" for this suite.
Feb  7 11:38:35.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:38:35.357: INFO: namespace: e2e-tests-kubectl-bxprz, resource: bindings, ignored listing per whitelist
Feb  7 11:38:35.470: INFO: namespace e2e-tests-kubectl-bxprz deletion completed in 6.221372344s

• [SLOW TEST:6.641 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
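The api-versions test boils down to one assertion: the core `v1` group/version must appear in the server's advertised list. Sketched here against an abridged copy of the output captured in the log above rather than a live cluster:

```shell
# Abridged from the kubectl api-versions stdout logged above.
api_versions='admissionregistration.k8s.io/v1beta1
apps/v1
batch/v1
networking.k8s.io/v1
v1'
# -x matches the whole line, so "apps/v1" does not count as "v1".
if printf '%s\n' "$api_versions" | grep -qx 'v1'; then
  echo "v1 is available"
fi
```

Against a live cluster the equivalent would be `kubectl api-versions | grep -qx v1`.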
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:38:35.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  7 11:38:35.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lp4pt'
Feb  7 11:38:37.395: INFO: stderr: ""
Feb  7 11:38:37.395: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  7 11:38:38.411: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:38.411: INFO: Found 0 / 1
Feb  7 11:38:39.488: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:39.488: INFO: Found 0 / 1
Feb  7 11:38:40.409: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:40.409: INFO: Found 0 / 1
Feb  7 11:38:41.462: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:41.462: INFO: Found 0 / 1
Feb  7 11:38:42.530: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:42.530: INFO: Found 0 / 1
Feb  7 11:38:44.169: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:44.170: INFO: Found 0 / 1
Feb  7 11:38:44.694: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:44.694: INFO: Found 0 / 1
Feb  7 11:38:45.418: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:45.418: INFO: Found 0 / 1
Feb  7 11:38:46.429: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:46.429: INFO: Found 0 / 1
Feb  7 11:38:47.413: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:47.413: INFO: Found 1 / 1
Feb  7 11:38:47.413: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  7 11:38:47.420: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:47.420: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  7 11:38:47.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tbl4l --namespace=e2e-tests-kubectl-lp4pt -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  7 11:38:47.592: INFO: stderr: ""
Feb  7 11:38:47.592: INFO: stdout: "pod/redis-master-tbl4l patched\n"
STEP: checking annotations
Feb  7 11:38:47.605: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:38:47.605: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:38:47.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lp4pt" for this suite.
Feb  7 11:39:11.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:39:11.719: INFO: namespace: e2e-tests-kubectl-lp4pt, resource: bindings, ignored listing per whitelist
Feb  7 11:39:11.853: INFO: namespace e2e-tests-kubectl-lp4pt deletion completed in 24.241597402s

• [SLOW TEST:36.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
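
For reference, the strategic-merge patch applied by the Kubectl patch test above is exactly the body passed on the `kubectl patch` command line in this run:

```json
{"metadata":{"annotations":{"x":"y"}}}
```

It adds a single annotation `x=y` to the pod's metadata, which the subsequent "checking annotations" step then verifies.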
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:39:11.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:39:24.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-snk7c" for this suite.
Feb  7 11:40:18.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:40:18.357: INFO: namespace: e2e-tests-kubelet-test-snk7c, resource: bindings, ignored listing per whitelist
Feb  7 11:40:18.547: INFO: namespace e2e-tests-kubelet-test-snk7c deletion completed in 54.372332438s

• [SLOW TEST:66.694 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:40:18.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:40:29.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v5d7k" for this suite.
Feb  7 11:40:35.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:40:36.188: INFO: namespace: e2e-tests-emptydir-wrapper-v5d7k, resource: bindings, ignored listing per whitelist
Feb  7 11:40:36.207: INFO: namespace e2e-tests-emptydir-wrapper-v5d7k deletion completed in 6.445377352s

• [SLOW TEST:17.659 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:40:36.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  7 11:40:36.455: INFO: Waiting up to 5m0s for pod "pod-a83ab09b-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-gbbcv" to be "success or failure"
Feb  7 11:40:36.471: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.406817ms
Feb  7 11:40:38.653: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197781589s
Feb  7 11:40:40.671: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216057727s
Feb  7 11:40:42.689: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234197333s
Feb  7 11:40:44.741: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285890477s
Feb  7 11:40:46.766: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.311216183s
STEP: Saw pod success
Feb  7 11:40:46.766: INFO: Pod "pod-a83ab09b-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:40:46.781: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a83ab09b-499e-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:40:46.995: INFO: Waiting for pod pod-a83ab09b-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:40:47.081: INFO: Pod pod-a83ab09b-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:40:47.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gbbcv" for this suite.
Feb  7 11:40:53.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:40:53.243: INFO: namespace: e2e-tests-emptydir-gbbcv, resource: bindings, ignored listing per whitelist
Feb  7 11:40:53.313: INFO: namespace e2e-tests-emptydir-gbbcv deletion completed in 6.222102531s

• [SLOW TEST:17.105 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
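
The EmptyDir test above checks the mode of a volume on the default medium (node-local disk). A minimal sketch of such a pod spec — the container name matches the log, but the image, command, and mount path are illustrative assumptions, not taken from this run:

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: test-container                      # container name from the log above
    image: docker.io/library/busybox:1.29     # illustrative image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume                 # illustrative mount path
  volumes:
  - name: test-volume
    emptyDir: {}                              # default medium; `medium: Memory` would use tmpfs instead
```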
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:40:53.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  7 11:41:14.957: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 11:41:14.973: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 11:41:16.973: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 11:41:17.181: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 11:41:18.974: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 11:41:18.989: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 11:41:20.974: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 11:41:20.989: INFO: Pod pod-with-prestop-http-hook still exists
Feb  7 11:41:22.974: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  7 11:41:22.988: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:41:23.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-slvxc" for this suite.
Feb  7 11:41:49.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:41:49.248: INFO: namespace: e2e-tests-container-lifecycle-hook-slvxc, resource: bindings, ignored listing per whitelist
Feb  7 11:41:49.283: INFO: namespace e2e-tests-container-lifecycle-hook-slvxc deletion completed in 26.255925212s

• [SLOW TEST:55.970 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
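
The lifecycle-hook test above creates a pod with a preStop HTTP hook, then deletes it and checks that the hook fired. A minimal sketch of such a spec — the pod name matches the log, while the image, path, and port are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook            # pod name from the log above
spec:
  containers:
  - name: main                                # illustrative container name
    image: docker.io/library/nginx:1.14-alpine  # illustrative image
    lifecycle:
      preStop:
        httpGet:
          # Issued by the kubelet just before the container is terminated;
          # the handler pod created in BeforeEach records this request.
          path: /echo                         # illustrative path
          port: 8080                          # illustrative port
```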
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:41:49.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jrxz5/configmap-test-d3bde81b-499e-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:41:49.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-jrxz5" to be "success or failure"
Feb  7 11:41:49.546: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.114386ms
Feb  7 11:41:51.903: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368812806s
Feb  7 11:41:53.943: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408954079s
Feb  7 11:41:55.977: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443434668s
Feb  7 11:41:57.994: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.459808541s
Feb  7 11:42:00.005: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.471079687s
STEP: Saw pod success
Feb  7 11:42:00.005: INFO: Pod "pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:42:00.010: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005 container env-test: 
STEP: delete the pod
Feb  7 11:42:00.104: INFO: Waiting for pod pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:42:00.112: INFO: Pod pod-configmaps-d3bea6b7-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:42:00.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jrxz5" for this suite.
Feb  7 11:42:06.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:42:06.377: INFO: namespace: e2e-tests-configmap-jrxz5, resource: bindings, ignored listing per whitelist
Feb  7 11:42:06.379: INFO: namespace e2e-tests-configmap-jrxz5 deletion completed in 6.254472887s

• [SLOW TEST:17.096 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
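
The ConfigMap test above consumes a ConfigMap key as a container environment variable. In spec terms it uses `env[].valueFrom.configMapKeyRef`; the container name below matches the log, while the image, variable, ConfigMap, and key names are illustrative assumptions:

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: env-test                            # container name from the log above
    image: docker.io/library/busybox:1.29     # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1                     # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test                # illustrative ConfigMap name
          key: data-1                         # illustrative key
```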
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:42:06.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  7 11:42:06.612: INFO: Waiting up to 5m0s for pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-containers-5dqqq" to be "success or failure"
Feb  7 11:42:06.650: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.205139ms
Feb  7 11:42:08.664: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051823712s
Feb  7 11:42:11.397: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.785008988s
Feb  7 11:42:13.460: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.848272868s
Feb  7 11:42:15.473: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.860969221s
STEP: Saw pod success
Feb  7 11:42:15.473: INFO: Pod "client-containers-ddf7fd36-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:42:15.477: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ddf7fd36-499e-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 11:42:16.737: INFO: Waiting for pod client-containers-ddf7fd36-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:42:16.760: INFO: Pod client-containers-ddf7fd36-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:42:16.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5dqqq" for this suite.
Feb  7 11:42:22.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:42:23.015: INFO: namespace: e2e-tests-containers-5dqqq, resource: bindings, ignored listing per whitelist
Feb  7 11:42:23.018: INFO: namespace e2e-tests-containers-5dqqq deletion completed in 6.251248226s

• [SLOW TEST:16.638 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
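
The Docker Containers test above ("test override all") overrides both the image's defaults: `command` replaces the image ENTRYPOINT and `args` replaces its CMD. A sketch with the container name from the log and otherwise illustrative values:

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: test-container                      # container name from the log above
    image: docker.io/library/busybox:1.29     # illustrative image
    command: ["/bin/sh"]                      # replaces the image ENTRYPOINT
    args: ["-c", "echo override all"]         # replaces the image CMD
```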
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:42:23.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:42:23.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-h2dtl" to be "success or failure"
Feb  7 11:42:23.279: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.484616ms
Feb  7 11:42:25.639: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401495122s
Feb  7 11:42:27.660: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422184025s
Feb  7 11:42:29.677: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438909138s
Feb  7 11:42:31.708: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.470441575s
Feb  7 11:42:33.780: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.542196511s
STEP: Saw pod success
Feb  7 11:42:33.780: INFO: Pod "downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:42:33.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:42:33.997: INFO: Waiting for pod downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:42:34.084: INFO: Pod downwardapi-volume-e7e19be1-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:42:34.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h2dtl" for this suite.
Feb  7 11:42:40.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:42:40.214: INFO: namespace: e2e-tests-downward-api-h2dtl, resource: bindings, ignored listing per whitelist
Feb  7 11:42:40.305: INFO: namespace e2e-tests-downward-api-h2dtl deletion completed in 6.162229322s

• [SLOW TEST:17.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
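
The Downward API volume test above exposes the container's CPU limit as a file via `resourceFieldRef`. The container name below matches the log; the limit value, file name, and mount path are illustrative assumptions:

```yaml
spec:
  containers:
  - name: client-container                    # container name from the log above
    image: docker.io/library/busybox:1.29     # illustrative image
    resources:
      limits:
        cpu: "1"                              # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo                 # illustrative mount path
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit                       # illustrative file name
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu                # the value the test reads back
```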
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:42:40.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f22117a4-499e-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 11:42:40.605: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-pv974" to be "success or failure"
Feb  7 11:42:40.638: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.384948ms
Feb  7 11:42:42.669: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064260612s
Feb  7 11:42:44.706: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101122292s
Feb  7 11:42:47.098: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493747481s
Feb  7 11:42:49.138: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533260189s
Feb  7 11:42:51.153: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.54873803s
STEP: Saw pod success
Feb  7 11:42:51.154: INFO: Pod "pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:42:51.157: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 11:42:51.594: INFO: Waiting for pod pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005 to disappear
Feb  7 11:42:52.184: INFO: Pod pod-projected-configmaps-f23863b8-499e-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:42:52.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pv974" for this suite.
Feb  7 11:42:58.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:42:58.686: INFO: namespace: e2e-tests-projected-pv974, resource: bindings, ignored listing per whitelist
Feb  7 11:42:58.720: INFO: namespace e2e-tests-projected-pv974 deletion completed in 6.509243739s

• [SLOW TEST:18.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:42:58.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:42:59.168: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fd2f30e8-499e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002008cc2), BlockOwnerDeletion:(*bool)(0xc002008cc3)}}
Feb  7 11:42:59.296: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fd289a18-499e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001fbc6b2), BlockOwnerDeletion:(*bool)(0xc001fbc6b3)}}
Feb  7 11:42:59.341: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fd2c72d9-499e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0025200fa), BlockOwnerDeletion:(*bool)(0xc0025200fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:43:04.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2rzmj" for this suite.
Feb  7 11:43:10.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:43:10.844: INFO: namespace: e2e-tests-gc-2rzmj, resource: bindings, ignored listing per whitelist
Feb  7 11:43:10.859: INFO: namespace e2e-tests-gc-2rzmj deletion completed in 6.296462133s

• [SLOW TEST:12.139 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
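
The garbage-collector test above builds a deliberate ownership cycle (pod1 ← pod3, pod2 ← pod1, pod3 ← pod2) and verifies deletion is not blocked by it. The logged ownerReferences for pod1 correspond to metadata of this shape (UID copied from this run, layout reconstructed for readability):

```yaml
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3                                # pod1 is owned by pod3, closing the cycle
    uid: fd2f30e8-499e-11ea-a994-fa163e34d433 # UID from the log above
    controller: true
    blockOwnerDeletion: true
```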
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:43:10.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rkt4t
Feb  7 11:43:21.357: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rkt4t
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 11:43:21.363: INFO: Initial restart count of pod liveness-http is 0
Feb  7 11:43:41.610: INFO: Restart count of pod e2e-tests-container-probe-rkt4t/liveness-http is now 1 (20.246793198s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:43:41.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rkt4t" for this suite.
Feb  7 11:43:47.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:43:47.969: INFO: namespace: e2e-tests-container-probe-rkt4t, resource: bindings, ignored listing per whitelist
Feb  7 11:43:48.080: INFO: namespace e2e-tests-container-probe-rkt4t deletion completed in 6.307661352s

• [SLOW TEST:37.221 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
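
The probe test above creates pod `liveness-http` whose `/healthz` handler starts failing, so the kubelet restarts the container (restartCount goes from 0 to 1 in the log). A sketch of such a probe — the path comes from the test name, while the image, port, and timings are illustrative assumptions:

```yaml
spec:
  containers:
  - name: liveness                            # illustrative container name
    image: k8s.gcr.io/liveness                # illustrative image
    livenessProbe:
      httpGet:
        path: /healthz                        # probe path from the test name
        port: 8080                            # illustrative port
      initialDelaySeconds: 15                 # illustrative timings
      failureThreshold: 3
```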
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:43:48.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:43:48.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-rttlf" to be "success or failure"
Feb  7 11:43:48.411: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.27704ms
Feb  7 11:43:50.425: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099692892s
Feb  7 11:43:52.446: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120789956s
Feb  7 11:43:55.212: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.88620771s
Feb  7 11:43:57.219: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.893588717s
Feb  7 11:43:59.233: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907693673s
STEP: Saw pod success
Feb  7 11:43:59.233: INFO: Pod "downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:43:59.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:44:00.209: INFO: Waiting for pod downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005 to disappear
Feb  7 11:44:00.545: INFO: Pod downwardapi-volume-1a953f7b-499f-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:44:00.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rttlf" for this suite.
Feb  7 11:44:06.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:44:07.219: INFO: namespace: e2e-tests-projected-rttlf, resource: bindings, ignored listing per whitelist
Feb  7 11:44:07.252: INFO: namespace e2e-tests-projected-rttlf deletion completed in 6.674033276s

• [SLOW TEST:19.172 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:44:07.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  7 11:44:07.471: INFO: Waiting up to 5m0s for pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-2jjhr" to be "success or failure"
Feb  7 11:44:07.483: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.300949ms
Feb  7 11:44:09.581: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109794312s
Feb  7 11:44:11.601: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130088292s
Feb  7 11:44:14.039: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.567555675s
Feb  7 11:44:16.186: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714485921s
Feb  7 11:44:18.203: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.731663154s
STEP: Saw pod success
Feb  7 11:44:18.203: INFO: Pod "downward-api-25ff183c-499f-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:44:18.225: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-25ff183c-499f-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 11:44:18.321: INFO: Waiting for pod downward-api-25ff183c-499f-11ea-abae-0242ac110005 to disappear
Feb  7 11:44:18.328: INFO: Pod downward-api-25ff183c-499f-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:44:18.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2jjhr" for this suite.
Feb  7 11:44:24.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:44:24.493: INFO: namespace: e2e-tests-downward-api-2jjhr, resource: bindings, ignored listing per whitelist
Feb  7 11:44:24.549: INFO: namespace e2e-tests-downward-api-2jjhr deletion completed in 6.20950387s

• [SLOW TEST:17.297 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:44:24.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 11:44:45.096: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:45.180: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:47.181: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:47.203: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:49.181: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:49.200: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:51.181: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:51.212: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:53.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:53.204: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:55.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:55.204: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:57.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:57.203: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:44:59.181: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:44:59.202: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:45:01.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:45:01.201: INFO: Pod pod-with-poststart-http-hook still exists
Feb  7 11:45:03.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  7 11:45:03.197: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:45:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qs4pn" for this suite.
Feb  7 11:45:27.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:45:27.306: INFO: namespace: e2e-tests-container-lifecycle-hook-qs4pn, resource: bindings, ignored listing per whitelist
Feb  7 11:45:27.413: INFO: namespace e2e-tests-container-lifecycle-hook-qs4pn deletion completed in 24.200204782s

• [SLOW TEST:62.863 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:45:27.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:45:53.728: INFO: Container started at 2020-02-07 11:45:36 +0000 UTC, pod became ready at 2020-02-07 11:45:53 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:45:53.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c2lcf" for this suite.
Feb  7 11:46:15.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:46:16.182: INFO: namespace: e2e-tests-container-probe-c2lcf, resource: bindings, ignored listing per whitelist
Feb  7 11:46:16.264: INFO: namespace e2e-tests-container-probe-c2lcf deletion completed in 22.526912516s

• [SLOW TEST:48.851 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:46:16.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:46:16.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  7 11:46:16.862: INFO: stderr: ""
Feb  7 11:46:16.863: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:46:16.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5kmm2" for this suite.
Feb  7 11:46:22.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:46:23.097: INFO: namespace: e2e-tests-kubectl-5kmm2, resource: bindings, ignored listing per whitelist
Feb  7 11:46:23.104: INFO: namespace e2e-tests-kubectl-5kmm2 deletion completed in 6.206428776s

• [SLOW TEST:6.839 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:46:23.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-lblv6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lblv6 to expose endpoints map[]
Feb  7 11:46:23.365: INFO: Get endpoints failed (10.301697ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  7 11:46:24.374: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lblv6 exposes endpoints map[] (1.019928564s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lblv6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lblv6 to expose endpoints map[pod1:[80]]
Feb  7 11:46:32.153: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (7.756253845s elapsed, will retry)
Feb  7 11:46:36.319: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lblv6 exposes endpoints map[pod1:[80]] (11.921626153s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lblv6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lblv6 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  7 11:46:42.908: INFO: Unexpected endpoints: found map[77a08ceb-499f-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (6.552730923s elapsed, will retry)
Feb  7 11:46:44.990: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lblv6 exposes endpoints map[pod1:[80] pod2:[80]] (8.635316038s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lblv6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lblv6 to expose endpoints map[pod2:[80]]
Feb  7 11:46:46.144: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lblv6 exposes endpoints map[pod2:[80]] (1.12744255s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lblv6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lblv6 to expose endpoints map[]
Feb  7 11:46:47.399: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lblv6 exposes endpoints map[] (1.205049697s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:46:47.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lblv6" for this suite.
Feb  7 11:47:12.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:47:13.076: INFO: namespace: e2e-tests-services-lblv6, resource: bindings, ignored listing per whitelist
Feb  7 11:47:13.086: INFO: namespace e2e-tests-services-lblv6 deletion completed in 24.933387212s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.982 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:47:13.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb  7 11:47:13.569: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-vqnwt" to be "success or failure"
Feb  7 11:47:13.595: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.33551ms
Feb  7 11:47:15.618: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048701105s
Feb  7 11:47:17.628: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057991627s
Feb  7 11:47:19.733: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163053099s
Feb  7 11:47:22.417: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847917995s
Feb  7 11:47:24.435: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.865403866s
Feb  7 11:47:26.517: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.947811848s
STEP: Saw pod success
Feb  7 11:47:26.518: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  7 11:47:26.549: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  7 11:47:26.766: INFO: Waiting for pod pod-host-path-test to disappear
Feb  7 11:47:26.821: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:47:26.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-vqnwt" for this suite.
Feb  7 11:47:32.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:47:32.951: INFO: namespace: e2e-tests-hostpath-vqnwt, resource: bindings, ignored listing per whitelist
Feb  7 11:47:33.101: INFO: namespace e2e-tests-hostpath-vqnwt deletion completed in 6.257708265s

• [SLOW TEST:20.015 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:47:33.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vhsql
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  7 11:47:33.529: INFO: Found 0 stateful pods, waiting for 3
Feb  7 11:47:43.543: INFO: Found 2 stateful pods, waiting for 3
Feb  7 11:47:53.558: INFO: Found 2 stateful pods, waiting for 3
Feb  7 11:48:03.547: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 11:48:03.547: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 11:48:03.547: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 11:48:13.550: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 11:48:13.550: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 11:48:13.550: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 11:48:13.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vhsql ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 11:48:14.563: INFO: stderr: "I0207 11:48:13.890680     413 log.go:172] (0xc0007b4420) (0xc00063f360) Create stream\nI0207 11:48:13.890843     413 log.go:172] (0xc0007b4420) (0xc00063f360) Stream added, broadcasting: 1\nI0207 11:48:13.904152     413 log.go:172] (0xc0007b4420) Reply frame received for 1\nI0207 11:48:13.904192     413 log.go:172] (0xc0007b4420) (0xc000146000) Create stream\nI0207 11:48:13.904221     413 log.go:172] (0xc0007b4420) (0xc000146000) Stream added, broadcasting: 3\nI0207 11:48:13.905356     413 log.go:172] (0xc0007b4420) Reply frame received for 3\nI0207 11:48:13.905380     413 log.go:172] (0xc0007b4420) (0xc00069c000) Create stream\nI0207 11:48:13.905389     413 log.go:172] (0xc0007b4420) (0xc00069c000) Stream added, broadcasting: 5\nI0207 11:48:13.906504     413 log.go:172] (0xc0007b4420) Reply frame received for 5\nI0207 11:48:14.233019     413 log.go:172] (0xc0007b4420) Data frame received for 3\nI0207 11:48:14.233346     413 log.go:172] (0xc000146000) (3) Data frame handling\nI0207 11:48:14.233382     413 log.go:172] (0xc000146000) (3) Data frame sent\nI0207 11:48:14.555630     413 log.go:172] (0xc0007b4420) (0xc000146000) Stream removed, broadcasting: 3\nI0207 11:48:14.555730     413 log.go:172] (0xc0007b4420) Data frame received for 1\nI0207 11:48:14.555750     413 log.go:172] (0xc00063f360) (1) Data frame handling\nI0207 11:48:14.555767     413 log.go:172] (0xc00063f360) (1) Data frame sent\nI0207 11:48:14.555778     413 log.go:172] (0xc0007b4420) (0xc00063f360) Stream removed, broadcasting: 1\nI0207 11:48:14.555789     413 log.go:172] (0xc0007b4420) (0xc00069c000) Stream removed, broadcasting: 5\nI0207 11:48:14.555822     413 log.go:172] (0xc0007b4420) Go away received\nI0207 11:48:14.556030     413 log.go:172] (0xc0007b4420) (0xc00063f360) Stream removed, broadcasting: 1\nI0207 11:48:14.556053     413 log.go:172] (0xc0007b4420) (0xc000146000) Stream removed, broadcasting: 3\nI0207 11:48:14.556064     413 log.go:172] (0xc0007b4420) (0xc00069c000) Stream removed, broadcasting: 5\n"
Feb  7 11:48:14.563: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 11:48:14.563: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  7 11:48:14.729: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  7 11:48:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vhsql ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 11:48:25.292: INFO: stderr: "I0207 11:48:25.009390     436 log.go:172] (0xc00089e210) (0xc00089c5a0) Create stream\nI0207 11:48:25.009534     436 log.go:172] (0xc00089e210) (0xc00089c5a0) Stream added, broadcasting: 1\nI0207 11:48:25.014453     436 log.go:172] (0xc00089e210) Reply frame received for 1\nI0207 11:48:25.014497     436 log.go:172] (0xc00089e210) (0xc00070c000) Create stream\nI0207 11:48:25.014519     436 log.go:172] (0xc00089e210) (0xc00070c000) Stream added, broadcasting: 3\nI0207 11:48:25.015824     436 log.go:172] (0xc00089e210) Reply frame received for 3\nI0207 11:48:25.015863     436 log.go:172] (0xc00089e210) (0xc00070c0a0) Create stream\nI0207 11:48:25.015881     436 log.go:172] (0xc00089e210) (0xc00070c0a0) Stream added, broadcasting: 5\nI0207 11:48:25.016842     436 log.go:172] (0xc00089e210) Reply frame received for 5\nI0207 11:48:25.141689     436 log.go:172] (0xc00089e210) Data frame received for 3\nI0207 11:48:25.141825     436 log.go:172] (0xc00070c000) (3) Data frame handling\nI0207 11:48:25.141866     436 log.go:172] (0xc00070c000) (3) Data frame sent\nI0207 11:48:25.284506     436 log.go:172] (0xc00089e210) (0xc00070c000) Stream removed, broadcasting: 3\nI0207 11:48:25.284670     436 log.go:172] (0xc00089e210) Data frame received for 1\nI0207 11:48:25.284701     436 log.go:172] (0xc00089c5a0) (1) Data frame handling\nI0207 11:48:25.284713     436 log.go:172] (0xc00089c5a0) (1) Data frame sent\nI0207 11:48:25.284734     436 log.go:172] (0xc00089e210) (0xc00070c0a0) Stream removed, broadcasting: 5\nI0207 11:48:25.284762     436 log.go:172] (0xc00089e210) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0207 11:48:25.284783     436 log.go:172] (0xc00089e210) Go away received\nI0207 11:48:25.285174     436 log.go:172] (0xc00089e210) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0207 11:48:25.285256     436 log.go:172] (0xc00089e210) (0xc00070c000) Stream removed, broadcasting: 3\nI0207 11:48:25.285265     436 log.go:172] (0xc00089e210) (0xc00070c0a0) Stream removed, broadcasting: 5\n"
Feb  7 11:48:25.293: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 11:48:25.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 11:48:35.464: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:48:35.464: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:48:35.464: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:48:45.488: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:48:45.488: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:48:45.489: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:48:55.489: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:48:55.489: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:49:05.508: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:49:05.508: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 11:49:15.476: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  7 11:49:25.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vhsql ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 11:49:26.195: INFO: stderr: "I0207 11:49:25.750827     458 log.go:172] (0xc0008602c0) (0xc0007914a0) Create stream\nI0207 11:49:25.751007     458 log.go:172] (0xc0008602c0) (0xc0007914a0) Stream added, broadcasting: 1\nI0207 11:49:25.758597     458 log.go:172] (0xc0008602c0) Reply frame received for 1\nI0207 11:49:25.758635     458 log.go:172] (0xc0008602c0) (0xc000512000) Create stream\nI0207 11:49:25.758645     458 log.go:172] (0xc0008602c0) (0xc000512000) Stream added, broadcasting: 3\nI0207 11:49:25.759929     458 log.go:172] (0xc0008602c0) Reply frame received for 3\nI0207 11:49:25.759958     458 log.go:172] (0xc0008602c0) (0xc0003f0000) Create stream\nI0207 11:49:25.759965     458 log.go:172] (0xc0008602c0) (0xc0003f0000) Stream added, broadcasting: 5\nI0207 11:49:25.760818     458 log.go:172] (0xc0008602c0) Reply frame received for 5\nI0207 11:49:26.054998     458 log.go:172] (0xc0008602c0) Data frame received for 3\nI0207 11:49:26.055053     458 log.go:172] (0xc000512000) (3) Data frame handling\nI0207 11:49:26.055074     458 log.go:172] (0xc000512000) (3) Data frame sent\nI0207 11:49:26.188137     458 log.go:172] (0xc0008602c0) Data frame received for 1\nI0207 11:49:26.188237     458 log.go:172] (0xc0007914a0) (1) Data frame handling\nI0207 11:49:26.188263     458 log.go:172] (0xc0007914a0) (1) Data frame sent\nI0207 11:49:26.188285     458 log.go:172] (0xc0008602c0) (0xc0007914a0) Stream removed, broadcasting: 1\nI0207 11:49:26.188638     458 log.go:172] (0xc0008602c0) (0xc0003f0000) Stream removed, broadcasting: 5\nI0207 11:49:26.188698     458 log.go:172] (0xc0008602c0) (0xc000512000) Stream removed, broadcasting: 3\nI0207 11:49:26.188759     458 log.go:172] (0xc0008602c0) (0xc0007914a0) Stream removed, broadcasting: 1\nI0207 11:49:26.188770     458 log.go:172] (0xc0008602c0) (0xc000512000) Stream removed, broadcasting: 3\nI0207 11:49:26.188776     458 log.go:172] (0xc0008602c0) (0xc0003f0000) Stream removed, broadcasting: 5\nI0207 11:49:26.188917     458 log.go:172] (0xc0008602c0) Go away received\n"
Feb  7 11:49:26.195: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 11:49:26.195: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 11:49:36.293: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  7 11:49:46.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vhsql ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 11:49:47.062: INFO: stderr: "I0207 11:49:46.641894     480 log.go:172] (0xc00014c6e0) (0xc000740640) Create stream\nI0207 11:49:46.642137     480 log.go:172] (0xc00014c6e0) (0xc000740640) Stream added, broadcasting: 1\nI0207 11:49:46.649321     480 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0207 11:49:46.649396     480 log.go:172] (0xc00014c6e0) (0xc00059cd20) Create stream\nI0207 11:49:46.649412     480 log.go:172] (0xc00014c6e0) (0xc00059cd20) Stream added, broadcasting: 3\nI0207 11:49:46.650871     480 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0207 11:49:46.650898     480 log.go:172] (0xc00014c6e0) (0xc0007406e0) Create stream\nI0207 11:49:46.650905     480 log.go:172] (0xc00014c6e0) (0xc0007406e0) Stream added, broadcasting: 5\nI0207 11:49:46.652036     480 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0207 11:49:46.816761     480 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0207 11:49:46.816821     480 log.go:172] (0xc00059cd20) (3) Data frame handling\nI0207 11:49:46.816836     480 log.go:172] (0xc00059cd20) (3) Data frame sent\nI0207 11:49:47.052876     480 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0207 11:49:47.052959     480 log.go:172] (0xc000740640) (1) Data frame handling\nI0207 11:49:47.052978     480 log.go:172] (0xc000740640) (1) Data frame sent\nI0207 11:49:47.053079     480 log.go:172] (0xc00014c6e0) (0xc000740640) Stream removed, broadcasting: 1\nI0207 11:49:47.053717     480 log.go:172] (0xc00014c6e0) (0xc00059cd20) Stream removed, broadcasting: 3\nI0207 11:49:47.054168     480 log.go:172] (0xc00014c6e0) (0xc0007406e0) Stream removed, broadcasting: 5\nI0207 11:49:47.054308     480 log.go:172] (0xc00014c6e0) (0xc000740640) Stream removed, broadcasting: 1\nI0207 11:49:47.054337     480 log.go:172] (0xc00014c6e0) (0xc00059cd20) Stream removed, broadcasting: 3\nI0207 11:49:47.054449     480 log.go:172] (0xc00014c6e0) (0xc0007406e0) Stream removed, broadcasting: 5\nI0207 11:49:47.054851     480 log.go:172] (0xc00014c6e0) Go away received\n"
Feb  7 11:49:47.063: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 11:49:47.063: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 11:49:47.490: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:49:47.490: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:49:47.490: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:49:47.490: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:49:57.527: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:49:57.527: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:49:57.527: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:50:07.534: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:50:07.535: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:50:07.535: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:50:17.527: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:50:17.527: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:50:27.760: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
Feb  7 11:50:27.760: INFO: Waiting for Pod e2e-tests-statefulset-vhsql/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  7 11:50:37.521: INFO: Waiting for StatefulSet e2e-tests-statefulset-vhsql/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  7 11:50:47.534: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vhsql
Feb  7 11:50:47.540: INFO: Scaling statefulset ss2 to 0
Feb  7 11:51:17.579: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 11:51:17.587: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:51:17.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vhsql" for this suite.
Feb  7 11:51:25.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:51:25.832: INFO: namespace: e2e-tests-statefulset-vhsql, resource: bindings, ignored listing per whitelist
Feb  7 11:51:25.900: INFO: namespace e2e-tests-statefulset-vhsql deletion completed in 8.239761625s

• [SLOW TEST:232.798 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:51:25.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-fck5
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 11:51:26.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fck5" in namespace "e2e-tests-subpath-99czj" to be "success or failure"
Feb  7 11:51:26.132: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.776506ms
Feb  7 11:51:28.144: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028678473s
Feb  7 11:51:30.167: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052201098s
Feb  7 11:51:32.228: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113425978s
Feb  7 11:51:34.656: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541014237s
Feb  7 11:51:36.666: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.550578159s
Feb  7 11:51:39.275: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.159568294s
Feb  7 11:51:41.289: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.174466197s
Feb  7 11:51:43.314: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.198792887s
Feb  7 11:51:45.333: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 19.217979327s
Feb  7 11:51:47.350: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 21.23492538s
Feb  7 11:51:49.368: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 23.252753105s
Feb  7 11:51:51.385: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 25.269614005s
Feb  7 11:51:53.407: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 27.292400028s
Feb  7 11:51:55.451: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 29.336227212s
Feb  7 11:51:57.474: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 31.359196692s
Feb  7 11:51:59.496: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Running", Reason="", readiness=false. Elapsed: 33.381048924s
Feb  7 11:52:01.506: INFO: Pod "pod-subpath-test-secret-fck5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.390909165s
STEP: Saw pod success
Feb  7 11:52:01.506: INFO: Pod "pod-subpath-test-secret-fck5" satisfied condition "success or failure"
Feb  7 11:52:01.509: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-fck5 container test-container-subpath-secret-fck5: 
STEP: delete the pod
Feb  7 11:52:02.252: INFO: Waiting for pod pod-subpath-test-secret-fck5 to disappear
Feb  7 11:52:02.529: INFO: Pod pod-subpath-test-secret-fck5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-fck5
Feb  7 11:52:02.529: INFO: Deleting pod "pod-subpath-test-secret-fck5" in namespace "e2e-tests-subpath-99czj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:52:02.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-99czj" for this suite.
Feb  7 11:52:10.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:52:10.789: INFO: namespace: e2e-tests-subpath-99czj, resource: bindings, ignored listing per whitelist
Feb  7 11:52:10.927: INFO: namespace e2e-tests-subpath-99czj deletion completed in 8.354825533s

• [SLOW TEST:45.026 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:52:10.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:52:11.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-9kptb" to be "success or failure"
Feb  7 11:52:11.305: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 136.911613ms
Feb  7 11:52:14.512: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.343857223s
Feb  7 11:52:16.530: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.361851288s
Feb  7 11:52:18.622: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.453893953s
Feb  7 11:52:20.653: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.485082972s
Feb  7 11:52:22.684: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.516719114s
STEP: Saw pod success
Feb  7 11:52:22.685: INFO: Pod "downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:52:22.696: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:52:23.087: INFO: Waiting for pod downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005 to disappear
Feb  7 11:52:23.153: INFO: Pod downwardapi-volume-46500dad-49a0-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:52:23.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9kptb" for this suite.
Feb  7 11:52:29.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:52:29.415: INFO: namespace: e2e-tests-downward-api-9kptb, resource: bindings, ignored listing per whitelist
Feb  7 11:52:29.424: INFO: namespace e2e-tests-downward-api-9kptb deletion completed in 6.24982477s

• [SLOW TEST:18.497 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:52:29.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 11:52:29.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vqr8k'
Feb  7 11:52:31.412: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 11:52:31.412: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  7 11:52:33.663: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-h8qgd]
Feb  7 11:52:33.663: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-h8qgd" in namespace "e2e-tests-kubectl-vqr8k" to be "running and ready"
Feb  7 11:52:33.672: INFO: Pod "e2e-test-nginx-rc-h8qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.983845ms
Feb  7 11:52:35.704: INFO: Pod "e2e-test-nginx-rc-h8qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04123433s
Feb  7 11:52:37.950: INFO: Pod "e2e-test-nginx-rc-h8qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286935769s
Feb  7 11:52:39.975: INFO: Pod "e2e-test-nginx-rc-h8qgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312421303s
Feb  7 11:52:41.996: INFO: Pod "e2e-test-nginx-rc-h8qgd": Phase="Running", Reason="", readiness=true. Elapsed: 8.333628502s
Feb  7 11:52:41.996: INFO: Pod "e2e-test-nginx-rc-h8qgd" satisfied condition "running and ready"
Feb  7 11:52:41.996: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-h8qgd]
Feb  7 11:52:41.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vqr8k'
Feb  7 11:52:42.778: INFO: stderr: ""
Feb  7 11:52:42.778: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  7 11:52:42.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vqr8k'
Feb  7 11:52:42.926: INFO: stderr: ""
Feb  7 11:52:42.926: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:52:42.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vqr8k" for this suite.
Feb  7 11:53:06.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:53:06.993: INFO: namespace: e2e-tests-kubectl-vqr8k, resource: bindings, ignored listing per whitelist
Feb  7 11:53:07.124: INFO: namespace e2e-tests-kubectl-vqr8k deletion completed in 24.192409478s

• [SLOW TEST:37.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:53:07.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:53:07.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-f4n7j" to be "success or failure"
Feb  7 11:53:07.403: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.157255ms
Feb  7 11:53:09.915: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.536503819s
Feb  7 11:53:11.945: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566634244s
Feb  7 11:53:14.443: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.064640732s
Feb  7 11:53:16.570: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191347242s
Feb  7 11:53:18.587: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.207916386s
STEP: Saw pod success
Feb  7 11:53:18.587: INFO: Pod "downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:53:18.593: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:53:19.351: INFO: Waiting for pod downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005 to disappear
Feb  7 11:53:19.392: INFO: Pod downwardapi-volume-67d22403-49a0-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:53:19.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f4n7j" for this suite.
Feb  7 11:53:25.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:53:25.653: INFO: namespace: e2e-tests-projected-f4n7j, resource: bindings, ignored listing per whitelist
Feb  7 11:53:25.817: INFO: namespace e2e-tests-projected-f4n7j deletion completed in 6.410926861s

• [SLOW TEST:18.693 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:53:25.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  7 11:53:26.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:26.356: INFO: stderr: ""
Feb  7 11:53:26.356: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 11:53:26.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:26.587: INFO: stderr: ""
Feb  7 11:53:26.588: INFO: stdout: "update-demo-nautilus-lbl9c update-demo-nautilus-lm22g "
Feb  7 11:53:26.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:26.742: INFO: stderr: ""
Feb  7 11:53:26.742: INFO: stdout: ""
Feb  7 11:53:26.742: INFO: update-demo-nautilus-lbl9c is created but not running
Feb  7 11:53:31.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:31.923: INFO: stderr: ""
Feb  7 11:53:31.923: INFO: stdout: "update-demo-nautilus-lbl9c update-demo-nautilus-lm22g "
Feb  7 11:53:31.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:32.059: INFO: stderr: ""
Feb  7 11:53:32.059: INFO: stdout: ""
Feb  7 11:53:32.059: INFO: update-demo-nautilus-lbl9c is created but not running
Feb  7 11:53:37.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:37.191: INFO: stderr: ""
Feb  7 11:53:37.191: INFO: stdout: "update-demo-nautilus-lbl9c update-demo-nautilus-lm22g "
Feb  7 11:53:37.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:37.312: INFO: stderr: ""
Feb  7 11:53:37.312: INFO: stdout: ""
Feb  7 11:53:37.312: INFO: update-demo-nautilus-lbl9c is created but not running
Feb  7 11:53:42.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:42.406: INFO: stderr: ""
Feb  7 11:53:42.406: INFO: stdout: "update-demo-nautilus-lbl9c update-demo-nautilus-lm22g "
Feb  7 11:53:42.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:42.501: INFO: stderr: ""
Feb  7 11:53:42.501: INFO: stdout: "true"
Feb  7 11:53:42.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:42.598: INFO: stderr: ""
Feb  7 11:53:42.598: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 11:53:42.598: INFO: validating pod update-demo-nautilus-lbl9c
Feb  7 11:53:42.653: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 11:53:42.653: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 11:53:42.653: INFO: update-demo-nautilus-lbl9c is verified up and running
Feb  7 11:53:42.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm22g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:42.744: INFO: stderr: ""
Feb  7 11:53:42.744: INFO: stdout: "true"
Feb  7 11:53:42.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lm22g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:42.900: INFO: stderr: ""
Feb  7 11:53:42.900: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 11:53:42.900: INFO: validating pod update-demo-nautilus-lm22g
Feb  7 11:53:42.991: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 11:53:42.991: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 11:53:42.991: INFO: update-demo-nautilus-lm22g is verified up and running
STEP: scaling down the replication controller
Feb  7 11:53:42.995: INFO: scanned /root for discovery docs: 
Feb  7 11:53:42.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:44.162: INFO: stderr: ""
Feb  7 11:53:44.162: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 11:53:44.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:44.448: INFO: stderr: ""
Feb  7 11:53:44.448: INFO: stdout: "update-demo-nautilus-lbl9c update-demo-nautilus-lm22g "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  7 11:53:49.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:49.639: INFO: stderr: ""
Feb  7 11:53:49.639: INFO: stdout: "update-demo-nautilus-lbl9c "
Feb  7 11:53:49.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:49.817: INFO: stderr: ""
Feb  7 11:53:49.817: INFO: stdout: "true"
Feb  7 11:53:49.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:49.906: INFO: stderr: ""
Feb  7 11:53:49.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 11:53:49.906: INFO: validating pod update-demo-nautilus-lbl9c
Feb  7 11:53:49.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 11:53:49.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 11:53:49.915: INFO: update-demo-nautilus-lbl9c is verified up and running
STEP: scaling up the replication controller
Feb  7 11:53:49.918: INFO: scanned /root for discovery docs: 
Feb  7 11:53:49.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:51.089: INFO: stderr: ""
Feb  7 11:53:51.089: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 11:53:51.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:51.288: INFO: stderr: ""
Feb  7 11:53:51.288: INFO: stdout: "update-demo-nautilus-7w2r6 update-demo-nautilus-lbl9c "
Feb  7 11:53:51.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7w2r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:51.398: INFO: stderr: ""
Feb  7 11:53:51.398: INFO: stdout: ""
Feb  7 11:53:51.398: INFO: update-demo-nautilus-7w2r6 is created but not running
Feb  7 11:53:56.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:56.573: INFO: stderr: ""
Feb  7 11:53:56.573: INFO: stdout: "update-demo-nautilus-7w2r6 update-demo-nautilus-lbl9c "
Feb  7 11:53:56.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7w2r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:53:56.764: INFO: stderr: ""
Feb  7 11:53:56.764: INFO: stdout: ""
Feb  7 11:53:56.765: INFO: update-demo-nautilus-7w2r6 is created but not running
Feb  7 11:54:01.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:01.945: INFO: stderr: ""
Feb  7 11:54:01.945: INFO: stdout: "update-demo-nautilus-7w2r6 update-demo-nautilus-lbl9c "
Feb  7 11:54:01.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7w2r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.075: INFO: stderr: ""
Feb  7 11:54:02.075: INFO: stdout: "true"
Feb  7 11:54:02.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7w2r6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.187: INFO: stderr: ""
Feb  7 11:54:02.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 11:54:02.187: INFO: validating pod update-demo-nautilus-7w2r6
Feb  7 11:54:02.195: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 11:54:02.195: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 11:54:02.195: INFO: update-demo-nautilus-7w2r6 is verified up and running
Feb  7 11:54:02.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.304: INFO: stderr: ""
Feb  7 11:54:02.304: INFO: stdout: "true"
Feb  7 11:54:02.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbl9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.395: INFO: stderr: ""
Feb  7 11:54:02.395: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 11:54:02.396: INFO: validating pod update-demo-nautilus-lbl9c
Feb  7 11:54:02.408: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 11:54:02.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 11:54:02.408: INFO: update-demo-nautilus-lbl9c is verified up and running
STEP: using delete to clean up resources
Feb  7 11:54:02.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 11:54:02.641: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 11:54:02.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-n7c4h'
Feb  7 11:54:02.853: INFO: stderr: "No resources found.\n"
Feb  7 11:54:02.853: INFO: stdout: ""
Feb  7 11:54:02.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-n7c4h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 11:54:03.054: INFO: stderr: ""
Feb  7 11:54:03.054: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:54:03.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n7c4h" for this suite.
Feb  7 11:54:27.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:54:27.240: INFO: namespace: e2e-tests-kubectl-n7c4h, resource: bindings, ignored listing per whitelist
Feb  7 11:54:27.342: INFO: namespace e2e-tests-kubectl-n7c4h deletion completed in 24.267383237s

• [SLOW TEST:61.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:54:27.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  7 11:54:27.582: INFO: namespace e2e-tests-kubectl-625hr
Feb  7 11:54:27.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-625hr'
Feb  7 11:54:27.960: INFO: stderr: ""
Feb  7 11:54:27.960: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  7 11:54:28.969: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:28.969: INFO: Found 0 / 1
Feb  7 11:54:30.009: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:30.009: INFO: Found 0 / 1
Feb  7 11:54:31.005: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:31.005: INFO: Found 0 / 1
Feb  7 11:54:31.975: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:31.975: INFO: Found 0 / 1
Feb  7 11:54:33.367: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:33.367: INFO: Found 0 / 1
Feb  7 11:54:33.979: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:33.979: INFO: Found 0 / 1
Feb  7 11:54:34.976: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:34.976: INFO: Found 0 / 1
Feb  7 11:54:35.976: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:35.976: INFO: Found 0 / 1
Feb  7 11:54:36.974: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:36.974: INFO: Found 1 / 1
Feb  7 11:54:36.974: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 11:54:36.977: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 11:54:36.977: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  7 11:54:36.977: INFO: wait on redis-master startup in e2e-tests-kubectl-625hr 
Feb  7 11:54:36.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bv4f redis-master --namespace=e2e-tests-kubectl-625hr'
Feb  7 11:54:37.139: INFO: stderr: ""
Feb  7 11:54:37.139: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 11:54:35.426 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 11:54:35.427 # Server started, Redis version 3.2.12\n1:M 07 Feb 11:54:35.427 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 11:54:35.428 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  7 11:54:37.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-625hr'
Feb  7 11:54:37.387: INFO: stderr: ""
Feb  7 11:54:37.387: INFO: stdout: "service/rm2 exposed\n"
Feb  7 11:54:37.408: INFO: Service rm2 in namespace e2e-tests-kubectl-625hr found.
STEP: exposing service
Feb  7 11:54:39.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-625hr'
Feb  7 11:54:39.684: INFO: stderr: ""
Feb  7 11:54:39.684: INFO: stdout: "service/rm3 exposed\n"
Feb  7 11:54:39.775: INFO: Service rm3 in namespace e2e-tests-kubectl-625hr found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:54:41.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-625hr" for this suite.
Feb  7 11:55:06.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:55:06.181: INFO: namespace: e2e-tests-kubectl-625hr, resource: bindings, ignored listing per whitelist
Feb  7 11:55:06.295: INFO: namespace e2e-tests-kubectl-625hr deletion completed in 24.44590124s

• [SLOW TEST:38.953 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:55:06.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 11:55:06.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lht9w'
Feb  7 11:55:06.689: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 11:55:06.689: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb  7 11:55:10.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lht9w'
Feb  7 11:55:11.006: INFO: stderr: ""
Feb  7 11:55:11.006: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:55:11.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lht9w" for this suite.
Feb  7 11:55:17.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:55:17.222: INFO: namespace: e2e-tests-kubectl-lht9w, resource: bindings, ignored listing per whitelist
Feb  7 11:55:17.292: INFO: namespace e2e-tests-kubectl-lht9w deletion completed in 6.272877825s

• [SLOW TEST:10.996 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:55:17.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 11:55:17.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-wvf5t" to be "success or failure"
Feb  7 11:55:17.619: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 162.364491ms
Feb  7 11:55:19.640: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182537882s
Feb  7 11:55:21.657: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199964355s
Feb  7 11:55:23.677: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219964435s
Feb  7 11:55:25.838: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381131355s
Feb  7 11:55:28.027: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569885799s
STEP: Saw pod success
Feb  7 11:55:28.027: INFO: Pod "downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:55:28.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 11:55:28.627: INFO: Waiting for pod downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005 to disappear
Feb  7 11:55:28.634: INFO: Pod downwardapi-volume-b55a9ee3-49a0-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:55:28.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wvf5t" for this suite.
Feb  7 11:55:34.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:55:34.722: INFO: namespace: e2e-tests-projected-wvf5t, resource: bindings, ignored listing per whitelist
Feb  7 11:55:34.957: INFO: namespace e2e-tests-projected-wvf5t deletion completed in 6.30836935s

• [SLOW TEST:17.664 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:55:34.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 11:55:35.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:55:43.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-njg4b" for this suite.
Feb  7 11:56:37.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:56:38.005: INFO: namespace: e2e-tests-pods-njg4b, resource: bindings, ignored listing per whitelist
Feb  7 11:56:38.054: INFO: namespace e2e-tests-pods-njg4b deletion completed in 54.234679749s

• [SLOW TEST:63.096 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:56:38.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-e5822cf5-49a0-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e5822cf5-49a0-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:58:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tb62g" for this suite.
Feb  7 11:58:42.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:58:42.248: INFO: namespace: e2e-tests-configmap-tb62g, resource: bindings, ignored listing per whitelist
Feb  7 11:58:42.270: INFO: namespace e2e-tests-configmap-tb62g deletion completed in 24.279104991s

• [SLOW TEST:124.216 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:58:42.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  7 11:58:42.586: INFO: Waiting up to 5m0s for pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-b4l6f" to be "success or failure"
Feb  7 11:58:42.593: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.403527ms
Feb  7 11:58:44.621: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034997517s
Feb  7 11:58:47.215: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629379005s
Feb  7 11:58:49.268: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.682250238s
Feb  7 11:58:51.304: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.717946417s
STEP: Saw pod success
Feb  7 11:58:51.304: INFO: Pod "downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 11:58:51.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 11:58:51.500: INFO: Waiting for pod downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005 to disappear
Feb  7 11:58:51.516: INFO: Pod downward-api-2f9d68ac-49a1-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:58:51.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b4l6f" for this suite.
Feb  7 11:58:57.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 11:58:57.805: INFO: namespace: e2e-tests-downward-api-b4l6f, resource: bindings, ignored listing per whitelist
Feb  7 11:58:57.914: INFO: namespace e2e-tests-downward-api-b4l6f deletion completed in 6.377584118s

• [SLOW TEST:15.644 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 11:58:57.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-p8g72
Feb  7 11:59:08.146: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-p8g72
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 11:59:08.153: INFO: Initial restart count of pod liveness-exec is 0
Feb  7 11:59:58.715: INFO: Restart count of pod e2e-tests-container-probe-p8g72/liveness-exec is now 1 (50.561957076s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 11:59:58.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-p8g72" for this suite.
Feb  7 12:00:06.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:00:07.073: INFO: namespace: e2e-tests-container-probe-p8g72, resource: bindings, ignored listing per whitelist
Feb  7 12:00:07.085: INFO: namespace e2e-tests-container-probe-p8g72 deletion completed in 8.30090823s

• [SLOW TEST:69.170 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:00:07.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6217c692-49a1-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:00:07.268: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-krq89" to be "success or failure"
Feb  7 12:00:07.294: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.871968ms
Feb  7 12:00:09.365: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097558665s
Feb  7 12:00:11.382: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11375807s
Feb  7 12:00:13.394: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125811209s
Feb  7 12:00:15.403: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135383457s
Feb  7 12:00:17.426: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158108943s
STEP: Saw pod success
Feb  7 12:00:17.426: INFO: Pod "pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:00:17.435: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 12:00:17.773: INFO: Waiting for pod pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005 to disappear
Feb  7 12:00:17.790: INFO: Pod pod-projected-configmaps-6218b0e6-49a1-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:00:17.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-krq89" for this suite.
Feb  7 12:00:23.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:00:24.022: INFO: namespace: e2e-tests-projected-krq89, resource: bindings, ignored listing per whitelist
Feb  7 12:00:24.161: INFO: namespace e2e-tests-projected-krq89 deletion completed in 6.357237639s

• [SLOW TEST:17.076 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:00:24.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6c3fb44b-49a1-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:00:24.339: INFO: Waiting up to 5m0s for pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-9zq5z" to be "success or failure"
Feb  7 12:00:24.351: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348099ms
Feb  7 12:00:26.365: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026278032s
Feb  7 12:00:28.389: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049379126s
Feb  7 12:00:30.399: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059713473s
Feb  7 12:00:32.638: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299104646s
Feb  7 12:00:34.664: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32528797s
STEP: Saw pod success
Feb  7 12:00:34.665: INFO: Pod "pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:00:34.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 12:00:34.764: INFO: Waiting for pod pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005 to disappear
Feb  7 12:00:34.783: INFO: Pod pod-secrets-6c4069b2-49a1-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:00:34.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9zq5z" for this suite.
Feb  7 12:00:40.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:00:41.100: INFO: namespace: e2e-tests-secrets-9zq5z, resource: bindings, ignored listing per whitelist
Feb  7 12:00:41.107: INFO: namespace e2e-tests-secrets-9zq5z deletion completed in 6.26250456s

• [SLOW TEST:16.945 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:00:41.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-7666c6dc-49a1-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:00:41.351: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-nxx6p" to be "success or failure"
Feb  7 12:00:41.392: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.274239ms
Feb  7 12:00:43.406: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054339888s
Feb  7 12:00:45.414: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062865255s
Feb  7 12:00:47.757: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405777279s
Feb  7 12:00:49.773: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421891013s
Feb  7 12:00:51.820: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.469235708s
STEP: Saw pod success
Feb  7 12:00:51.821: INFO: Pod "pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:00:51.851: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 12:00:52.693: INFO: Waiting for pod pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005 to disappear
Feb  7 12:00:52.742: INFO: Pod pod-projected-secrets-76681bbe-49a1-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:00:52.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nxx6p" for this suite.
Feb  7 12:00:59.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:00:59.310: INFO: namespace: e2e-tests-projected-nxx6p, resource: bindings, ignored listing per whitelist
Feb  7 12:00:59.356: INFO: namespace e2e-tests-projected-nxx6p deletion completed in 6.583341942s

• [SLOW TEST:18.249 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:00:59.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 12:00:59.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2jvnq'
Feb  7 12:00:59.730: INFO: stderr: ""
Feb  7 12:00:59.730: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  7 12:01:09.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2jvnq -o json'
Feb  7 12:01:09.962: INFO: stderr: ""
Feb  7 12:01:09.962: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-07T12:00:59Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-2jvnq\",\n        \"resourceVersion\": \"20860273\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-2jvnq/pods/e2e-test-nginx-pod\",\n        \"uid\": \"815b6c58-49a1-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-xp2z7\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-xp2z7\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-xp2z7\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T12:00:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T12:01:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T12:01:08Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-07T12:00:59Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://d36b6e2ab293ec82c3d258dbd88487dbeec34e067fb46040cb2ece3cd468fae0\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-02-07T12:01:07Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-07T12:00:59Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  7 12:01:09.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-2jvnq'
Feb  7 12:01:10.320: INFO: stderr: ""
Feb  7 12:01:10.320: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb  7 12:01:10.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-2jvnq'
Feb  7 12:01:19.290: INFO: stderr: ""
Feb  7 12:01:19.290: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:01:19.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2jvnq" for this suite.
Feb  7 12:01:25.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:01:25.531: INFO: namespace: e2e-tests-kubectl-2jvnq, resource: bindings, ignored listing per whitelist
Feb  7 12:01:25.561: INFO: namespace e2e-tests-kubectl-2jvnq deletion completed in 6.241839087s

• [SLOW TEST:26.205 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:01:25.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  7 12:01:25.873: INFO: PodSpec: initContainers in spec.initContainers
Feb  7 12:02:42.962: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-90f4ab87-49a1-11ea-abae-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-ztf4r", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-ztf4r/pods/pod-init-90f4ab87-49a1-11ea-abae-0242ac110005", UID:"90f70a03-49a1-11ea-a994-fa163e34d433", ResourceVersion:"20860435", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716673685, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"873779139", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9p89h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f6e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9p89h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9p89h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9p89h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016da088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00255a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0016da100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0016da120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0016da128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0016da12c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716673686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716673686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716673686, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716673685, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0025dc040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001074a10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001074a80)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://6d3ef5650e5eb65c99fa15cff2b21bad0dcf6e27173be101c091a2ae0c5b5c42"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025dc080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025dc060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:02:42.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-ztf4r" for this suite.
Feb  7 12:03:07.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:03:07.379: INFO: namespace: e2e-tests-init-container-ztf4r, resource: bindings, ignored listing per whitelist
Feb  7 12:03:07.433: INFO: namespace e2e-tests-init-container-ztf4r deletion completed in 24.326731593s

• [SLOW TEST:101.871 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:03:07.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 12:03:07.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p8rhm'
Feb  7 12:03:09.223: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 12:03:09.223: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb  7 12:03:09.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-p8rhm'
Feb  7 12:03:09.543: INFO: stderr: ""
Feb  7 12:03:09.543: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:03:09.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p8rhm" for this suite.
Feb  7 12:03:17.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:03:17.092: INFO: namespace: e2e-tests-kubectl-p8rhm, resource: bindings, ignored listing per whitelist
Feb  7 12:03:17.227: INFO: namespace e2e-tests-kubectl-p8rhm deletion completed in 7.671876873s

• [SLOW TEST:9.794 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
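Annotation (not part of the log): the deprecation warning in the stderr above comes from the old `kubectl run` generators, where the `--restart` flag chose which object `kubectl run` created. A minimal sketch, plain shell with no cluster needed, of the mapping as this suite itself demonstrates it — `--restart=Never` produced a bare pod earlier in the run, `--restart=OnFailure` produced the job.batch here:

```shell
# Sketch only: the restart-policy -> created-object mapping evidenced by
# this suite's own stdout ("pod/e2e-test-nginx-pod created" earlier,
# "job.batch/e2e-test-nginx-job created" above).
for restart in Never OnFailure; do
  case "$restart" in
    Never)     created="pod/e2e-test-nginx-pod" ;;
    OnFailure) created="job.batch/e2e-test-nginx-job" ;;
  esac
  echo "--restart=$restart -> $created created"
done
```

The stderr's suggested replacements (`--generator=run-pod/v1` or `kubectl create`) make the target object explicit instead of inferring it from `--restart`.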
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:03:17.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  7 12:03:17.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-p4zf6,SelfLink:/api/v1/namespaces/e2e-tests-watch-p4zf6/configmaps/e2e-watch-test-resource-version,UID:d36f1354-49a1-11ea-a994-fa163e34d433,ResourceVersion:20860523,Generation:0,CreationTimestamp:2020-02-07 12:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 12:03:17.626: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-p4zf6,SelfLink:/api/v1/namespaces/e2e-tests-watch-p4zf6/configmaps/e2e-watch-test-resource-version,UID:d36f1354-49a1-11ea-a994-fa163e34d433,ResourceVersion:20860524,Generation:0,CreationTimestamp:2020-02-07 12:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:03:17.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-p4zf6" for this suite.
Feb  7 12:03:23.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:03:23.839: INFO: namespace: e2e-tests-watch-p4zf6, resource: bindings, ignored listing per whitelist
Feb  7 12:03:24.029: INFO: namespace e2e-tests-watch-p4zf6 deletion completed in 6.381207184s

• [SLOW TEST:6.802 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
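Annotation (not part of the log): the two notifications above are exactly what a watch started from an earlier resourceVersion should deliver — only events whose resourceVersion is strictly greater than the starting one, in order. A minimal simulation in plain shell, assuming the first update returned resourceVersion 20860522 (one below the MODIFIED event shown; the log does not print the starting value):

```shell
# Sketch of watch semantics, not the e2e code: replay a plain event list
# and keep only events past the assumed starting resourceVersion.
start_rv=20860522
events='20860521 ADDED
20860523 MODIFIED
20860524 DELETED'
echo "$events" | while read -r rv kind; do
  if [ "$rv" -gt "$start_rv" ]; then
    echo "Got : $kind (rv $rv)"   # mirrors the "Got : MODIFIED/DELETED" lines above
  fi
done
```

This is why the watcher sees the second modification and the deletion but not the creation or first update that preceded the starting resourceVersion.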
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:03:24.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  7 12:03:24.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-dx25l run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  7 12:03:33.413: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0207 12:03:31.995793    1492 log.go:172] (0xc0001626e0) (0xc0005d5860) Create stream\nI0207 12:03:31.995934    1492 log.go:172] (0xc0001626e0) (0xc0005d5860) Stream added, broadcasting: 1\nI0207 12:03:31.999367    1492 log.go:172] (0xc0001626e0) Reply frame received for 1\nI0207 12:03:31.999434    1492 log.go:172] (0xc0001626e0) (0xc0008a0000) Create stream\nI0207 12:03:31.999443    1492 log.go:172] (0xc0001626e0) (0xc0008a0000) Stream added, broadcasting: 3\nI0207 12:03:32.000381    1492 log.go:172] (0xc0001626e0) Reply frame received for 3\nI0207 12:03:32.000404    1492 log.go:172] (0xc0001626e0) (0xc0005d2320) Create stream\nI0207 12:03:32.000412    1492 log.go:172] (0xc0001626e0) (0xc0005d2320) Stream added, broadcasting: 5\nI0207 12:03:32.001217    1492 log.go:172] (0xc0001626e0) Reply frame received for 5\nI0207 12:03:32.001237    1492 log.go:172] (0xc0001626e0) (0xc0005d5900) Create stream\nI0207 12:03:32.001243    1492 log.go:172] (0xc0001626e0) (0xc0005d5900) Stream added, broadcasting: 7\nI0207 12:03:32.002541    1492 log.go:172] (0xc0001626e0) Reply frame received for 7\nI0207 12:03:32.002721    1492 log.go:172] (0xc0008a0000) (3) Writing data frame\nI0207 12:03:32.002831    1492 log.go:172] (0xc0008a0000) (3) Writing data frame\nI0207 12:03:32.009749    1492 log.go:172] (0xc0001626e0) Data frame received for 5\nI0207 12:03:32.009760    1492 log.go:172] (0xc0005d2320) (5) Data frame handling\nI0207 12:03:32.009773    1492 log.go:172] (0xc0005d2320) (5) Data frame sent\nI0207 12:03:32.013668    1492 log.go:172] (0xc0001626e0) Data frame received for 5\nI0207 12:03:32.013683    1492 log.go:172] (0xc0005d2320) (5) Data frame handling\nI0207 12:03:32.013693    1492 log.go:172] (0xc0005d2320) (5) Data frame 
sent\nI0207 12:03:33.343899    1492 log.go:172] (0xc0001626e0) (0xc0005d5900) Stream removed, broadcasting: 7\nI0207 12:03:33.344046    1492 log.go:172] (0xc0001626e0) Data frame received for 1\nI0207 12:03:33.344098    1492 log.go:172] (0xc0001626e0) (0xc0008a0000) Stream removed, broadcasting: 3\nI0207 12:03:33.344149    1492 log.go:172] (0xc0005d5860) (1) Data frame handling\nI0207 12:03:33.344169    1492 log.go:172] (0xc0005d5860) (1) Data frame sent\nI0207 12:03:33.344199    1492 log.go:172] (0xc0001626e0) (0xc0005d2320) Stream removed, broadcasting: 5\nI0207 12:03:33.344229    1492 log.go:172] (0xc0001626e0) (0xc0005d5860) Stream removed, broadcasting: 1\nI0207 12:03:33.344255    1492 log.go:172] (0xc0001626e0) Go away received\nI0207 12:03:33.344459    1492 log.go:172] (0xc0001626e0) (0xc0005d5860) Stream removed, broadcasting: 1\nI0207 12:03:33.344481    1492 log.go:172] (0xc0001626e0) (0xc0008a0000) Stream removed, broadcasting: 3\nI0207 12:03:33.344490    1492 log.go:172] (0xc0001626e0) (0xc0005d2320) Stream removed, broadcasting: 5\nI0207 12:03:33.344498    1492 log.go:172] (0xc0001626e0) (0xc0005d5900) Stream removed, broadcasting: 7\n"
Feb  7 12:03:33.413: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:03:35.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dx25l" for this suite.
Feb  7 12:03:42.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:03:42.266: INFO: namespace: e2e-tests-kubectl-dx25l, resource: bindings, ignored listing per whitelist
Feb  7 12:03:42.324: INFO: namespace e2e-tests-kubectl-dx25l deletion completed in 6.860661578s

• [SLOW TEST:18.294 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
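Annotation (not part of the log): the container command behind the `run --rm --attach --stdin` invocation above can be replayed locally without a cluster. `cat` copies the attached stdin (kubectl piped "abcd1234" in before closing it) and the `echo` fires once stdin closes, which is exactly the `"abcd1234stdin closed"` stdout the test verified:

```shell
# Local replay of the attached command from the log: `cat` echoes stdin
# verbatim, then `echo` runs after stdin closes, with no newline in between.
printf 'abcd1234' | sh -c "cat && echo 'stdin closed'"
```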
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:03:42.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:03:42.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-889g4" for this suite.
Feb  7 12:03:49.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:03:49.539: INFO: namespace: e2e-tests-services-889g4, resource: bindings, ignored listing per whitelist
Feb  7 12:03:49.656: INFO: namespace e2e-tests-services-889g4 deletion completed in 6.820621858s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:7.331 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:03:49.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kfqxm
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-kfqxm
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-kfqxm
Feb  7 12:03:50.031: INFO: Found 0 stateful pods, waiting for 1
Feb  7 12:04:00.063: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb  7 12:04:10.060: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  7 12:04:10.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 12:04:10.685: INFO: stderr: "I0207 12:04:10.275690    1518 log.go:172] (0xc00015c000) (0xc0002fad20) Create stream\nI0207 12:04:10.275784    1518 log.go:172] (0xc00015c000) (0xc0002fad20) Stream added, broadcasting: 1\nI0207 12:04:10.281243    1518 log.go:172] (0xc00015c000) Reply frame received for 1\nI0207 12:04:10.281269    1518 log.go:172] (0xc00015c000) (0xc0002fae60) Create stream\nI0207 12:04:10.281275    1518 log.go:172] (0xc00015c000) (0xc0002fae60) Stream added, broadcasting: 3\nI0207 12:04:10.282428    1518 log.go:172] (0xc00015c000) Reply frame received for 3\nI0207 12:04:10.282469    1518 log.go:172] (0xc00015c000) (0xc0007b8000) Create stream\nI0207 12:04:10.282477    1518 log.go:172] (0xc00015c000) (0xc0007b8000) Stream added, broadcasting: 5\nI0207 12:04:10.283735    1518 log.go:172] (0xc00015c000) Reply frame received for 5\nI0207 12:04:10.428760    1518 log.go:172] (0xc00015c000) Data frame received for 3\nI0207 12:04:10.428819    1518 log.go:172] (0xc0002fae60) (3) Data frame handling\nI0207 12:04:10.428851    1518 log.go:172] (0xc0002fae60) (3) Data frame sent\nI0207 12:04:10.676358    1518 log.go:172] (0xc00015c000) Data frame received for 1\nI0207 12:04:10.676450    1518 log.go:172] (0xc0002fad20) (1) Data frame handling\nI0207 12:04:10.676486    1518 log.go:172] (0xc0002fad20) (1) Data frame sent\nI0207 12:04:10.676723    1518 log.go:172] (0xc00015c000) (0xc0002fad20) Stream removed, broadcasting: 1\nI0207 12:04:10.677406    1518 log.go:172] (0xc00015c000) (0xc0002fae60) Stream removed, broadcasting: 3\nI0207 12:04:10.678192    1518 log.go:172] (0xc00015c000) (0xc0007b8000) Stream removed, broadcasting: 5\nI0207 12:04:10.678230    1518 log.go:172] (0xc00015c000) (0xc0002fad20) Stream removed, broadcasting: 1\nI0207 12:04:10.678243    1518 log.go:172] (0xc00015c000) (0xc0002fae60) Stream removed, broadcasting: 3\nI0207 12:04:10.678258    1518 log.go:172] (0xc00015c000) (0xc0007b8000) Stream removed, broadcasting: 5\n"
Feb  7 12:04:10.686: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 12:04:10.686: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

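Annotation (not part of the log): the `mv ... || true` exec above is the test's readiness toggle — moving nginx's index.html away makes the pod's HTTP readiness probe fail, so ss-0 flips to Ready=false in the following lines without restarting the container, and `|| true` keeps the exec's exit code 0 even on a retry when the file is already gone. The same file dance on a scratch directory (the paths here are local stand-ins, not the pod's filesystem):

```shell
tmp=$(mktemp -d)                     # stand-in for the pod filesystem
mkdir -p "$tmp/html" "$tmp/stash"
echo hello > "$tmp/html/index.html"
# First move succeeds: the probe's target file is now gone.
mv -v "$tmp/html/index.html" "$tmp/stash/" || true
# Second move fails (source already moved) but `|| true` keeps exit 0,
# so a repeated exec never errors out.
mv "$tmp/html/index.html" "$tmp/stash/" 2>/dev/null || true
echo "exit=$? stashed=$(ls "$tmp/stash")"
rm -rf "$tmp"
```

The later `mv /tmp/index.html /usr/share/nginx/html/` execs in this test reverse the trick to make the pods Ready again before scaling.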
Feb  7 12:04:10.709: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  7 12:04:20.748: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 12:04:20.748: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 12:04:20.803: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999611s
Feb  7 12:04:21.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97306144s
Feb  7 12:04:22.874: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.946344265s
Feb  7 12:04:23.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.902172628s
Feb  7 12:04:24.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.881445243s
Feb  7 12:04:25.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.853013559s
Feb  7 12:04:26.971: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.82105097s
Feb  7 12:04:27.986: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.805008586s
Feb  7 12:04:29.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.790303628s
Feb  7 12:04:30.715: INFO: Verifying statefulset ss doesn't scale past 1 for another 74.164824ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-kfqxm
Feb  7 12:04:31.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 12:04:32.429: INFO: stderr: "I0207 12:04:31.985847    1541 log.go:172] (0xc0006ee370) (0xc000710640) Create stream\nI0207 12:04:31.986041    1541 log.go:172] (0xc0006ee370) (0xc000710640) Stream added, broadcasting: 1\nI0207 12:04:31.991970    1541 log.go:172] (0xc0006ee370) Reply frame received for 1\nI0207 12:04:31.991996    1541 log.go:172] (0xc0006ee370) (0xc0007106e0) Create stream\nI0207 12:04:31.992004    1541 log.go:172] (0xc0006ee370) (0xc0007106e0) Stream added, broadcasting: 3\nI0207 12:04:31.992925    1541 log.go:172] (0xc0006ee370) Reply frame received for 3\nI0207 12:04:31.992945    1541 log.go:172] (0xc0006ee370) (0xc000710780) Create stream\nI0207 12:04:31.992950    1541 log.go:172] (0xc0006ee370) (0xc000710780) Stream added, broadcasting: 5\nI0207 12:04:31.993812    1541 log.go:172] (0xc0006ee370) Reply frame received for 5\nI0207 12:04:32.145326    1541 log.go:172] (0xc0006ee370) Data frame received for 3\nI0207 12:04:32.145492    1541 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0207 12:04:32.145540    1541 log.go:172] (0xc0007106e0) (3) Data frame sent\nI0207 12:04:32.421753    1541 log.go:172] (0xc0006ee370) (0xc0007106e0) Stream removed, broadcasting: 3\nI0207 12:04:32.421951    1541 log.go:172] (0xc0006ee370) Data frame received for 1\nI0207 12:04:32.421977    1541 log.go:172] (0xc000710640) (1) Data frame handling\nI0207 12:04:32.421992    1541 log.go:172] (0xc000710640) (1) Data frame sent\nI0207 12:04:32.422026    1541 log.go:172] (0xc0006ee370) (0xc000710640) Stream removed, broadcasting: 1\nI0207 12:04:32.422061    1541 log.go:172] (0xc0006ee370) (0xc000710780) Stream removed, broadcasting: 5\nI0207 12:04:32.422078    1541 log.go:172] (0xc0006ee370) Go away received\nI0207 12:04:32.422341    1541 log.go:172] (0xc0006ee370) (0xc000710640) Stream removed, broadcasting: 1\nI0207 12:04:32.422359    1541 log.go:172] (0xc0006ee370) (0xc0007106e0) Stream removed, broadcasting: 3\nI0207 12:04:32.422371    1541 log.go:172] 
(0xc0006ee370) (0xc000710780) Stream removed, broadcasting: 5\n"
Feb  7 12:04:32.429: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 12:04:32.429: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 12:04:32.453: INFO: Found 1 stateful pods, waiting for 3
Feb  7 12:04:42.553: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:04:42.553: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:04:42.553: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 12:04:52.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:04:52.471: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:04:52.471: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  7 12:04:52.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 12:04:53.263: INFO: stderr: "I0207 12:04:52.882505    1562 log.go:172] (0xc000138840) (0xc0005cd400) Create stream\nI0207 12:04:52.882701    1562 log.go:172] (0xc000138840) (0xc0005cd400) Stream added, broadcasting: 1\nI0207 12:04:52.896114    1562 log.go:172] (0xc000138840) Reply frame received for 1\nI0207 12:04:52.896174    1562 log.go:172] (0xc000138840) (0xc000670000) Create stream\nI0207 12:04:52.896191    1562 log.go:172] (0xc000138840) (0xc000670000) Stream added, broadcasting: 3\nI0207 12:04:52.899444    1562 log.go:172] (0xc000138840) Reply frame received for 3\nI0207 12:04:52.899552    1562 log.go:172] (0xc000138840) (0xc00057c000) Create stream\nI0207 12:04:52.899592    1562 log.go:172] (0xc000138840) (0xc00057c000) Stream added, broadcasting: 5\nI0207 12:04:52.904398    1562 log.go:172] (0xc000138840) Reply frame received for 5\nI0207 12:04:53.095274    1562 log.go:172] (0xc000138840) Data frame received for 3\nI0207 12:04:53.095387    1562 log.go:172] (0xc000670000) (3) Data frame handling\nI0207 12:04:53.095421    1562 log.go:172] (0xc000670000) (3) Data frame sent\nI0207 12:04:53.256101    1562 log.go:172] (0xc000138840) Data frame received for 1\nI0207 12:04:53.256207    1562 log.go:172] (0xc000138840) (0xc000670000) Stream removed, broadcasting: 3\nI0207 12:04:53.256427    1562 log.go:172] (0xc0005cd400) (1) Data frame handling\nI0207 12:04:53.256506    1562 log.go:172] (0xc0005cd400) (1) Data frame sent\nI0207 12:04:53.256540    1562 log.go:172] (0xc000138840) (0xc00057c000) Stream removed, broadcasting: 5\nI0207 12:04:53.256597    1562 log.go:172] (0xc000138840) (0xc0005cd400) Stream removed, broadcasting: 1\nI0207 12:04:53.256661    1562 log.go:172] (0xc000138840) Go away received\nI0207 12:04:53.256950    1562 log.go:172] (0xc000138840) (0xc0005cd400) Stream removed, broadcasting: 1\nI0207 12:04:53.256969    1562 log.go:172] (0xc000138840) (0xc000670000) Stream removed, broadcasting: 3\nI0207 12:04:53.256987    1562 log.go:172] 
(0xc000138840) (0xc00057c000) Stream removed, broadcasting: 5\n"
Feb  7 12:04:53.263: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 12:04:53.263: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 12:04:53.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 12:04:54.039: INFO: stderr: "I0207 12:04:53.535941    1584 log.go:172] (0xc0006ba370) (0xc00064f4a0) Create stream\nI0207 12:04:53.536160    1584 log.go:172] (0xc0006ba370) (0xc00064f4a0) Stream added, broadcasting: 1\nI0207 12:04:53.540063    1584 log.go:172] (0xc0006ba370) Reply frame received for 1\nI0207 12:04:53.540089    1584 log.go:172] (0xc0006ba370) (0xc0005ca000) Create stream\nI0207 12:04:53.540099    1584 log.go:172] (0xc0006ba370) (0xc0005ca000) Stream added, broadcasting: 3\nI0207 12:04:53.541550    1584 log.go:172] (0xc0006ba370) Reply frame received for 3\nI0207 12:04:53.541580    1584 log.go:172] (0xc0006ba370) (0xc000764000) Create stream\nI0207 12:04:53.541596    1584 log.go:172] (0xc0006ba370) (0xc000764000) Stream added, broadcasting: 5\nI0207 12:04:53.543076    1584 log.go:172] (0xc0006ba370) Reply frame received for 5\nI0207 12:04:53.711625    1584 log.go:172] (0xc0006ba370) Data frame received for 3\nI0207 12:04:53.711723    1584 log.go:172] (0xc0005ca000) (3) Data frame handling\nI0207 12:04:53.711745    1584 log.go:172] (0xc0005ca000) (3) Data frame sent\nI0207 12:04:54.030188    1584 log.go:172] (0xc0006ba370) Data frame received for 1\nI0207 12:04:54.030368    1584 log.go:172] (0xc00064f4a0) (1) Data frame handling\nI0207 12:04:54.030421    1584 log.go:172] (0xc00064f4a0) (1) Data frame sent\nI0207 12:04:54.030441    1584 log.go:172] (0xc0006ba370) (0xc00064f4a0) Stream removed, broadcasting: 1\nI0207 12:04:54.031091    1584 log.go:172] (0xc0006ba370) (0xc0005ca000) Stream removed, broadcasting: 3\nI0207 12:04:54.031190    1584 log.go:172] (0xc0006ba370) (0xc000764000) Stream removed, broadcasting: 5\nI0207 12:04:54.031227    1584 log.go:172] (0xc0006ba370) Go away received\nI0207 12:04:54.031598    1584 log.go:172] (0xc0006ba370) (0xc00064f4a0) Stream removed, broadcasting: 1\nI0207 12:04:54.031641    1584 log.go:172] (0xc0006ba370) (0xc0005ca000) Stream removed, broadcasting: 3\nI0207 12:04:54.031655    1584 log.go:172] 
(0xc0006ba370) (0xc000764000) Stream removed, broadcasting: 5\n"
Feb  7 12:04:54.039: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 12:04:54.039: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 12:04:54.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 12:04:54.689: INFO: stderr: "I0207 12:04:54.337864    1606 log.go:172] (0xc000138630) (0xc000726640) Create stream\nI0207 12:04:54.338191    1606 log.go:172] (0xc000138630) (0xc000726640) Stream added, broadcasting: 1\nI0207 12:04:54.344099    1606 log.go:172] (0xc000138630) Reply frame received for 1\nI0207 12:04:54.344140    1606 log.go:172] (0xc000138630) (0xc0007266e0) Create stream\nI0207 12:04:54.344148    1606 log.go:172] (0xc000138630) (0xc0007266e0) Stream added, broadcasting: 3\nI0207 12:04:54.345200    1606 log.go:172] (0xc000138630) Reply frame received for 3\nI0207 12:04:54.345237    1606 log.go:172] (0xc000138630) (0xc000664c80) Create stream\nI0207 12:04:54.345253    1606 log.go:172] (0xc000138630) (0xc000664c80) Stream added, broadcasting: 5\nI0207 12:04:54.354952    1606 log.go:172] (0xc000138630) Reply frame received for 5\nI0207 12:04:54.531716    1606 log.go:172] (0xc000138630) Data frame received for 3\nI0207 12:04:54.532067    1606 log.go:172] (0xc0007266e0) (3) Data frame handling\nI0207 12:04:54.532111    1606 log.go:172] (0xc0007266e0) (3) Data frame sent\nI0207 12:04:54.684076    1606 log.go:172] (0xc000138630) Data frame received for 1\nI0207 12:04:54.684147    1606 log.go:172] (0xc000726640) (1) Data frame handling\nI0207 12:04:54.684172    1606 log.go:172] (0xc000726640) (1) Data frame sent\nI0207 12:04:54.684221    1606 log.go:172] (0xc000138630) (0xc000726640) Stream removed, broadcasting: 1\nI0207 12:04:54.684325    1606 log.go:172] (0xc000138630) (0xc0007266e0) Stream removed, broadcasting: 3\nI0207 12:04:54.684365    1606 log.go:172] (0xc000138630) (0xc000664c80) Stream removed, broadcasting: 5\nI0207 12:04:54.684406    1606 log.go:172] (0xc000138630) Go away received\nI0207 12:04:54.684497    1606 log.go:172] (0xc000138630) (0xc000726640) Stream removed, broadcasting: 1\nI0207 12:04:54.684513    1606 log.go:172] (0xc000138630) (0xc0007266e0) Stream removed, broadcasting: 3\nI0207 12:04:54.684524    1606 log.go:172] 
(0xc000138630) (0xc000664c80) Stream removed, broadcasting: 5\n"
Feb  7 12:04:54.689: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 12:04:54.689: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 12:04:54.689: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 12:04:55.009: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 12:04:55.009: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 12:04:55.009: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 12:04:55.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999394s
Feb  7 12:04:56.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974787963s
Feb  7 12:04:58.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.960349434s
Feb  7 12:04:59.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.027760702s
Feb  7 12:05:00.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.007374881s
Feb  7 12:05:01.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.899233562s
Feb  7 12:05:02.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.883271039s
Feb  7 12:05:03.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.810941492s
Feb  7 12:05:04.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 791.420662ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-kfqxm
Feb  7 12:05:05.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 12:05:06.062: INFO: stderr: "I0207 12:05:05.565198    1628 log.go:172] (0xc000742370) (0xc000661220) Create stream\nI0207 12:05:05.565615    1628 log.go:172] (0xc000742370) (0xc000661220) Stream added, broadcasting: 1\nI0207 12:05:05.586924    1628 log.go:172] (0xc000742370) Reply frame received for 1\nI0207 12:05:05.587054    1628 log.go:172] (0xc000742370) (0xc000562000) Create stream\nI0207 12:05:05.587080    1628 log.go:172] (0xc000742370) (0xc000562000) Stream added, broadcasting: 3\nI0207 12:05:05.589448    1628 log.go:172] (0xc000742370) Reply frame received for 3\nI0207 12:05:05.589551    1628 log.go:172] (0xc000742370) (0xc000574000) Create stream\nI0207 12:05:05.589581    1628 log.go:172] (0xc000742370) (0xc000574000) Stream added, broadcasting: 5\nI0207 12:05:05.592888    1628 log.go:172] (0xc000742370) Reply frame received for 5\nI0207 12:05:05.883841    1628 log.go:172] (0xc000742370) Data frame received for 3\nI0207 12:05:05.883945    1628 log.go:172] (0xc000562000) (3) Data frame handling\nI0207 12:05:05.883975    1628 log.go:172] (0xc000562000) (3) Data frame sent\nI0207 12:05:06.054964    1628 log.go:172] (0xc000742370) (0xc000574000) Stream removed, broadcasting: 5\nI0207 12:05:06.055167    1628 log.go:172] (0xc000742370) Data frame received for 1\nI0207 12:05:06.055206    1628 log.go:172] (0xc000742370) (0xc000562000) Stream removed, broadcasting: 3\nI0207 12:05:06.055254    1628 log.go:172] (0xc000661220) (1) Data frame handling\nI0207 12:05:06.055293    1628 log.go:172] (0xc000661220) (1) Data frame sent\nI0207 12:05:06.055305    1628 log.go:172] (0xc000742370) (0xc000661220) Stream removed, broadcasting: 1\nI0207 12:05:06.055323    1628 log.go:172] (0xc000742370) Go away received\nI0207 12:05:06.055554    1628 log.go:172] (0xc000742370) (0xc000661220) Stream removed, broadcasting: 1\nI0207 12:05:06.055569    1628 log.go:172] (0xc000742370) (0xc000562000) Stream removed, broadcasting: 3\nI0207 12:05:06.055574    1628 log.go:172] (0xc000742370) (0xc000574000) Stream removed, broadcasting: 5\n"
Feb  7 12:05:06.062: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 12:05:06.063: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 12:05:06.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 12:05:06.673: INFO: stderr: "I0207 12:05:06.214723    1651 log.go:172] (0xc00072a370) (0xc00078a640) Create stream\nI0207 12:05:06.214889    1651 log.go:172] (0xc00072a370) (0xc00078a640) Stream added, broadcasting: 1\nI0207 12:05:06.219959    1651 log.go:172] (0xc00072a370) Reply frame received for 1\nI0207 12:05:06.219984    1651 log.go:172] (0xc00072a370) (0xc00078a6e0) Create stream\nI0207 12:05:06.219989    1651 log.go:172] (0xc00072a370) (0xc00078a6e0) Stream added, broadcasting: 3\nI0207 12:05:06.220855    1651 log.go:172] (0xc00072a370) Reply frame received for 3\nI0207 12:05:06.220876    1651 log.go:172] (0xc00072a370) (0xc000636be0) Create stream\nI0207 12:05:06.220884    1651 log.go:172] (0xc00072a370) (0xc000636be0) Stream added, broadcasting: 5\nI0207 12:05:06.221638    1651 log.go:172] (0xc00072a370) Reply frame received for 5\nI0207 12:05:06.391355    1651 log.go:172] (0xc00072a370) Data frame received for 3\nI0207 12:05:06.391415    1651 log.go:172] (0xc00078a6e0) (3) Data frame handling\nI0207 12:05:06.391433    1651 log.go:172] (0xc00078a6e0) (3) Data frame sent\nI0207 12:05:06.664679    1651 log.go:172] (0xc00072a370) Data frame received for 1\nI0207 12:05:06.664749    1651 log.go:172] (0xc00078a640) (1) Data frame handling\nI0207 12:05:06.664771    1651 log.go:172] (0xc00078a640) (1) Data frame sent\nI0207 12:05:06.664793    1651 log.go:172] (0xc00072a370) (0xc00078a640) Stream removed, broadcasting: 1\nI0207 12:05:06.664983    1651 log.go:172] (0xc00072a370) (0xc00078a6e0) Stream removed, broadcasting: 3\nI0207 12:05:06.665042    1651 log.go:172] (0xc00072a370) (0xc000636be0) Stream removed, broadcasting: 5\nI0207 12:05:06.665069    1651 log.go:172] (0xc00072a370) (0xc00078a640) Stream removed, broadcasting: 1\nI0207 12:05:06.665083    1651 log.go:172] (0xc00072a370) (0xc00078a6e0) Stream removed, broadcasting: 3\nI0207 12:05:06.665097    1651 log.go:172] (0xc00072a370) (0xc000636be0) Stream removed, broadcasting: 5\nI0207 12:05:06.665666    1651 log.go:172] (0xc00072a370) Go away received\n"
Feb  7 12:05:06.673: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 12:05:06.673: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 12:05:06.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kfqxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 12:05:07.210: INFO: stderr: "I0207 12:05:06.903085    1672 log.go:172] (0xc0006bc370) (0xc0006da640) Create stream\nI0207 12:05:06.903192    1672 log.go:172] (0xc0006bc370) (0xc0006da640) Stream added, broadcasting: 1\nI0207 12:05:06.908367    1672 log.go:172] (0xc0006bc370) Reply frame received for 1\nI0207 12:05:06.908410    1672 log.go:172] (0xc0006bc370) (0xc000590fa0) Create stream\nI0207 12:05:06.908426    1672 log.go:172] (0xc0006bc370) (0xc000590fa0) Stream added, broadcasting: 3\nI0207 12:05:06.909781    1672 log.go:172] (0xc0006bc370) Reply frame received for 3\nI0207 12:05:06.909815    1672 log.go:172] (0xc0006bc370) (0xc0006da6e0) Create stream\nI0207 12:05:06.909825    1672 log.go:172] (0xc0006bc370) (0xc0006da6e0) Stream added, broadcasting: 5\nI0207 12:05:06.910780    1672 log.go:172] (0xc0006bc370) Reply frame received for 5\nI0207 12:05:07.034994    1672 log.go:172] (0xc0006bc370) Data frame received for 3\nI0207 12:05:07.035027    1672 log.go:172] (0xc000590fa0) (3) Data frame handling\nI0207 12:05:07.035040    1672 log.go:172] (0xc000590fa0) (3) Data frame sent\nI0207 12:05:07.205336    1672 log.go:172] (0xc0006bc370) Data frame received for 1\nI0207 12:05:07.205371    1672 log.go:172] (0xc0006da640) (1) Data frame handling\nI0207 12:05:07.205394    1672 log.go:172] (0xc0006da640) (1) Data frame sent\nI0207 12:05:07.205410    1672 log.go:172] (0xc0006bc370) (0xc000590fa0) Stream removed, broadcasting: 3\nI0207 12:05:07.205444    1672 log.go:172] (0xc0006bc370) (0xc0006da640) Stream removed, broadcasting: 1\nI0207 12:05:07.205548    1672 log.go:172] (0xc0006bc370) (0xc0006da6e0) Stream removed, broadcasting: 5\nI0207 12:05:07.205697    1672 log.go:172] (0xc0006bc370) (0xc0006da640) Stream removed, broadcasting: 1\nI0207 12:05:07.205714    1672 log.go:172] (0xc0006bc370) (0xc000590fa0) Stream removed, broadcasting: 3\nI0207 12:05:07.205721    1672 log.go:172] (0xc0006bc370) (0xc0006da6e0) Stream removed, broadcasting: 5\nI0207 12:05:07.205745    1672 log.go:172] (0xc0006bc370) Go away received\n"
Feb  7 12:05:07.210: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 12:05:07.210: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 12:05:07.210: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  7 12:05:37.435: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kfqxm
Feb  7 12:05:37.444: INFO: Scaling statefulset ss to 0
Feb  7 12:05:37.456: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 12:05:37.459: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:05:37.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kfqxm" for this suite.
Feb  7 12:05:45.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:05:45.655: INFO: namespace: e2e-tests-statefulset-kfqxm, resource: bindings, ignored listing per whitelist
Feb  7 12:05:45.664: INFO: namespace e2e-tests-statefulset-kfqxm deletion completed in 8.173821694s

• [SLOW TEST:116.008 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:05:45.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:05:45.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-22dqb" for this suite.
Feb  7 12:06:10.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:06:10.343: INFO: namespace: e2e-tests-kubelet-test-22dqb, resource: bindings, ignored listing per whitelist
Feb  7 12:06:10.531: INFO: namespace e2e-tests-kubelet-test-22dqb deletion completed in 24.550932351s

• [SLOW TEST:24.866 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:06:10.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  7 12:06:10.741: INFO: Waiting up to 5m0s for pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-fjvjb" to be "success or failure"
Feb  7 12:06:10.757: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.352729ms
Feb  7 12:06:12.769: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028269326s
Feb  7 12:06:14.781: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040033687s
Feb  7 12:06:16.878: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13681712s
Feb  7 12:06:18.893: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152180419s
Feb  7 12:06:20.909: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168116037s
STEP: Saw pod success
Feb  7 12:06:20.909: INFO: Pod "pod-3abd54b7-49a2-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:06:20.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3abd54b7-49a2-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 12:06:21.016: INFO: Waiting for pod pod-3abd54b7-49a2-11ea-abae-0242ac110005 to disappear
Feb  7 12:06:21.074: INFO: Pod pod-3abd54b7-49a2-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:06:21.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fjvjb" for this suite.
Feb  7 12:06:27.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:06:27.255: INFO: namespace: e2e-tests-emptydir-fjvjb, resource: bindings, ignored listing per whitelist
Feb  7 12:06:27.296: INFO: namespace e2e-tests-emptydir-fjvjb deletion completed in 6.214623496s

• [SLOW TEST:16.764 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:06:27.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bsfkb
Feb  7 12:06:37.721: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bsfkb
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 12:06:37.734: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:10:39.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bsfkb" for this suite.
Feb  7 12:10:46.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:10:46.161: INFO: namespace: e2e-tests-container-probe-bsfkb, resource: bindings, ignored listing per whitelist
Feb  7 12:10:46.209: INFO: namespace e2e-tests-container-probe-bsfkb deletion completed in 6.388876857s

• [SLOW TEST:258.912 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:10:46.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-df1685d3-49a2-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:10:46.506: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-rk6bw" to be "success or failure"
Feb  7 12:10:46.536: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.259815ms
Feb  7 12:10:48.574: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067816893s
Feb  7 12:10:50.600: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093692233s
Feb  7 12:10:52.640: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134071355s
Feb  7 12:10:54.789: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282736929s
Feb  7 12:10:56.998: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.491338787s
STEP: Saw pod success
Feb  7 12:10:56.998: INFO: Pod "pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:10:57.027: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 12:10:57.147: INFO: Waiting for pod pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005 to disappear
Feb  7 12:10:57.174: INFO: Pod pod-projected-configmaps-df18a4c8-49a2-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:10:57.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rk6bw" for this suite.
Feb  7 12:11:03.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:11:03.439: INFO: namespace: e2e-tests-projected-rk6bw, resource: bindings, ignored listing per whitelist
Feb  7 12:11:03.448: INFO: namespace e2e-tests-projected-rk6bw deletion completed in 6.261677816s

• [SLOW TEST:17.239 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:11:03.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e96d909d-49a2-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:11:03.887: INFO: Waiting up to 5m0s for pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-6jr42" to be "success or failure"
Feb  7 12:11:03.920: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.605663ms
Feb  7 12:11:05.940: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052425355s
Feb  7 12:11:07.950: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062833021s
Feb  7 12:11:10.061: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173853305s
Feb  7 12:11:12.086: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198739679s
Feb  7 12:11:14.106: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218619448s
STEP: Saw pod success
Feb  7 12:11:14.106: INFO: Pod "pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:11:14.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 12:11:14.545: INFO: Waiting for pod pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005 to disappear
Feb  7 12:11:14.885: INFO: Pod pod-configmaps-e96ef901-49a2-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:11:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6jr42" for this suite.
Feb  7 12:11:20.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:11:21.202: INFO: namespace: e2e-tests-configmap-6jr42, resource: bindings, ignored listing per whitelist
Feb  7 12:11:21.217: INFO: namespace e2e-tests-configmap-6jr42 deletion completed in 6.308744017s

• [SLOW TEST:17.768 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:11:21.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  7 12:11:32.077: INFO: Successfully updated pod "labelsupdatef3e4b08c-49a2-11ea-abae-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:11:34.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f5vx8" for this suite.
Feb  7 12:11:58.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:11:58.544: INFO: namespace: e2e-tests-projected-f5vx8, resource: bindings, ignored listing per whitelist
Feb  7 12:11:58.588: INFO: namespace e2e-tests-projected-f5vx8 deletion completed in 24.358850458s

• [SLOW TEST:37.370 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:11:58.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5n8cx
Feb  7 12:12:08.880: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5n8cx
STEP: checking the pod's current state and verifying that restartCount is present
Feb  7 12:12:08.887: INFO: Initial restart count of pod liveness-http is 0
Feb  7 12:12:30.147: INFO: Restart count of pod e2e-tests-container-probe-5n8cx/liveness-http is now 1 (21.259321689s elapsed)
Feb  7 12:12:48.416: INFO: Restart count of pod e2e-tests-container-probe-5n8cx/liveness-http is now 2 (39.528245485s elapsed)
Feb  7 12:13:11.225: INFO: Restart count of pod e2e-tests-container-probe-5n8cx/liveness-http is now 3 (1m2.338017006s elapsed)
Feb  7 12:13:31.423: INFO: Restart count of pod e2e-tests-container-probe-5n8cx/liveness-http is now 4 (1m22.535472731s elapsed)
Feb  7 12:14:40.161: INFO: Restart count of pod e2e-tests-container-probe-5n8cx/liveness-http is now 5 (2m31.273829127s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:14:40.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5n8cx" for this suite.
Feb  7 12:14:46.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:14:46.432: INFO: namespace: e2e-tests-container-probe-5n8cx, resource: bindings, ignored listing per whitelist
Feb  7 12:14:46.555: INFO: namespace e2e-tests-container-probe-5n8cx deletion completed in 6.228304529s

• [SLOW TEST:167.967 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:14:46.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6e5cbb95-49a3-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:14:46.855: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-6fjp5" to be "success or failure"
Feb  7 12:14:46.871: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.62854ms
Feb  7 12:14:48.888: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033014016s
Feb  7 12:14:50.915: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060805515s
Feb  7 12:14:52.936: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081066022s
Feb  7 12:14:54.955: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099918561s
Feb  7 12:14:56.979: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124811238s
STEP: Saw pod success
Feb  7 12:14:56.980: INFO: Pod "pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:14:56.991: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 12:14:57.153: INFO: Waiting for pod pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:14:57.165: INFO: Pod pod-projected-secrets-6e5e043b-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:14:57.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6fjp5" for this suite.
Feb  7 12:15:03.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:15:03.330: INFO: namespace: e2e-tests-projected-6fjp5, resource: bindings, ignored listing per whitelist
Feb  7 12:15:03.341: INFO: namespace e2e-tests-projected-6fjp5 deletion completed in 6.169820582s

• [SLOW TEST:16.784 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:15:03.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 12:15:03.557: INFO: Waiting up to 5m0s for pod "pod-78514d8e-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-9ghsc" to be "success or failure"
Feb  7 12:15:03.674: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.297621ms
Feb  7 12:15:06.231: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.673076855s
Feb  7 12:15:08.251: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.693163374s
Feb  7 12:15:10.410: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852510983s
Feb  7 12:15:12.445: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88773422s
Feb  7 12:15:14.477: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.919749445s
STEP: Saw pod success
Feb  7 12:15:14.477: INFO: Pod "pod-78514d8e-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:15:14.502: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-78514d8e-49a3-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 12:15:14.931: INFO: Waiting for pod pod-78514d8e-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:15:14.948: INFO: Pod pod-78514d8e-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:15:14.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9ghsc" for this suite.
Feb  7 12:15:20.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:15:21.083: INFO: namespace: e2e-tests-emptydir-9ghsc, resource: bindings, ignored listing per whitelist
Feb  7 12:15:21.159: INFO: namespace e2e-tests-emptydir-9ghsc deletion completed in 6.201489273s

• [SLOW TEST:17.818 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
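The EmptyDir test above ("non-root,0666,tmpfs") creates a short-lived pod with a memory-backed emptyDir, writes a file as a non-root user, and checks its mode. A hedged sketch of that shape follows; the real test uses the framework's mount-test image with dedicated flags, so the busybox command here only approximates the check, and all names are illustrative.

```yaml
# Hedged sketch: tmpfs-backed emptyDir exercised by a non-root container,
# approximating the (non-root,0666,tmpfs) EmptyDir conformance variant.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-tmpfs   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # "non-root" part of the test matrix
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative; real test uses mounttest
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # "tmpfs" part: memory-backed emptyDir
```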
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:15:21.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-82f2089b-49a3-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:15:21.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-6776t" to be "success or failure"
Feb  7 12:15:21.529: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.588873ms
Feb  7 12:15:24.156: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726913695s
Feb  7 12:15:26.183: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.753894s
Feb  7 12:15:28.282: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.852866372s
Feb  7 12:15:30.712: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282610675s
Feb  7 12:15:32.735: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.306349839s
STEP: Saw pod success
Feb  7 12:15:32.736: INFO: Pod "pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:15:32.751: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 12:15:32.910: INFO: Waiting for pod pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:15:32.920: INFO: Pod pod-configmaps-82f4df97-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:15:32.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6776t" for this suite.
Feb  7 12:15:38.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:15:39.028: INFO: namespace: e2e-tests-configmap-6776t, resource: bindings, ignored listing per whitelist
Feb  7 12:15:39.145: INFO: namespace e2e-tests-configmap-6776t deletion completed in 6.218609408s

• [SLOW TEST:17.986 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
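The ConfigMap test above verifies that `defaultMode` on a configMap volume controls the permission bits of the projected files. A minimal sketch under assumed names (the actual ConfigMap name in the log is generated per-run):

```yaml
# Hedged sketch: ConfigMap mounted as a volume with defaultMode set,
# approximating the "consumable from pods in volume with defaultMode" test.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume     # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # illustrative test image
    command: ["sh", "-c", "stat -c %a /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400           # the permission bits under test
```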
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:15:39.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:15:39.370: INFO: Creating ReplicaSet my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005
Feb  7 12:15:39.398: INFO: Pod name my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005: Found 0 pods out of 1
Feb  7 12:15:44.411: INFO: Pod name my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005: Found 1 pods out of 1
Feb  7 12:15:44.411: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005" is running
Feb  7 12:15:50.433: INFO: Pod "my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005-slftl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:15:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:15:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:15:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:15:39 +0000 UTC Reason: Message:}])
Feb  7 12:15:50.433: INFO: Trying to dial the pod
Feb  7 12:15:55.493: INFO: Controller my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005: Got expected result from replica 1 [my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005-slftl]: "my-hostname-basic-8dadf074-49a3-11ea-abae-0242ac110005-slftl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:15:55.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qqmkc" for this suite.
Feb  7 12:16:01.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:16:01.643: INFO: namespace: e2e-tests-replicaset-qqmkc, resource: bindings, ignored listing per whitelist
Feb  7 12:16:01.809: INFO: namespace e2e-tests-replicaset-qqmkc deletion completed in 6.302993293s

• [SLOW TEST:22.664 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
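The ReplicaSet test above creates a single-replica ReplicaSet whose pods serve their own hostname, waits for the pod to run, then dials it and expects the pod name back (the "Got expected result from replica 1" line). Sketched roughly as follows; the image and port are assumptions, since the conformance suite's serve-hostname image and tag vary by release:

```yaml
# Hedged sketch: ReplicaSet serving a basic public image, each replica
# answering HTTP requests with its own hostname.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic         # the log uses a generated UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve-hostname:1.1   # illustrative image/tag
        ports:
        - containerPort: 9376                  # assumed serve-hostname port
```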
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:16:01.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  7 12:16:02.077: INFO: Waiting up to 5m0s for pod "pod-9b326862-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-zclmk" to be "success or failure"
Feb  7 12:16:02.097: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.700915ms
Feb  7 12:16:04.529: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45137276s
Feb  7 12:16:06.583: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505707528s
Feb  7 12:16:08.606: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528985747s
Feb  7 12:16:10.850: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.773065381s
Feb  7 12:16:12.873: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.79561736s
Feb  7 12:16:14.923: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.846131547s
STEP: Saw pod success
Feb  7 12:16:14.924: INFO: Pod "pod-9b326862-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:16:14.939: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9b326862-49a3-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 12:16:15.216: INFO: Waiting for pod pod-9b326862-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:16:15.242: INFO: Pod pod-9b326862-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:16:15.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zclmk" for this suite.
Feb  7 12:16:23.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:16:23.414: INFO: namespace: e2e-tests-emptydir-zclmk, resource: bindings, ignored listing per whitelist
Feb  7 12:16:23.495: INFO: namespace e2e-tests-emptydir-zclmk deletion completed in 8.24297861s

• [SLOW TEST:21.686 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:16:23.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:16:32.187: INFO: Waiting up to 5m0s for pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-pods-s7j4x" to be "success or failure"
Feb  7 12:16:32.242: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.413675ms
Feb  7 12:16:34.455: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2686046s
Feb  7 12:16:36.473: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286146153s
Feb  7 12:16:38.761: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574170108s
Feb  7 12:16:40.779: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.592265837s
STEP: Saw pod success
Feb  7 12:16:40.779: INFO: Pod "client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:16:40.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005 container env3cont: 
STEP: delete the pod
Feb  7 12:16:41.231: INFO: Waiting for pod client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:16:41.282: INFO: Pod client-envvars-ad21f47e-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:16:41.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-s7j4x" for this suite.
Feb  7 12:17:35.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:17:35.482: INFO: namespace: e2e-tests-pods-s7j4x, resource: bindings, ignored listing per whitelist
Feb  7 12:17:35.668: INFO: namespace e2e-tests-pods-s7j4x deletion completed in 54.292877996s

• [SLOW TEST:72.172 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
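The Pods test above relies on the kubelet injecting `*_SERVICE_HOST` / `*_SERVICE_PORT` environment variables for Services that exist when a pod starts: it creates a server pod and a Service, then a client pod whose logs must contain those variables. A hedged sketch of the client side (service name and image are illustrative; the real test also creates the backing server pod first):

```yaml
# Hedged sketch: a Service plus a client pod that prints the service
# environment variables the kubelet injects at container start.
apiVersion: v1
kind: Service
metadata:
  name: fooservice                # illustrative name
spec:
  selector:
    name: env-server
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example
spec:
  restartPolicy: Never
  containers:
  - name: env3cont                # container name taken from the log above
    image: docker.io/library/busybox:1.29   # illustrative test image
    command: ["sh", "-c", "env | grep FOOSERVICE"]  # expects FOOSERVICE_SERVICE_HOST etc.
```

Because the variables are injected only at container start, the client pod must be created after the Service — which is why the log shows the test waiting several seconds before launching the client pod.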
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:17:35.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  7 12:17:35.926: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-nlg48,SelfLink:/api/v1/namespaces/e2e-tests-watch-nlg48/configmaps/e2e-watch-test-watch-closed,UID:d315cc02-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862138,Generation:0,CreationTimestamp:2020-02-07 12:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 12:17:35.926: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-nlg48,SelfLink:/api/v1/namespaces/e2e-tests-watch-nlg48/configmaps/e2e-watch-test-watch-closed,UID:d315cc02-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862139,Generation:0,CreationTimestamp:2020-02-07 12:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  7 12:17:35.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-nlg48,SelfLink:/api/v1/namespaces/e2e-tests-watch-nlg48/configmaps/e2e-watch-test-watch-closed,UID:d315cc02-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862140,Generation:0,CreationTimestamp:2020-02-07 12:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 12:17:35.956: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-nlg48,SelfLink:/api/v1/namespaces/e2e-tests-watch-nlg48/configmaps/e2e-watch-test-watch-closed,UID:d315cc02-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862141,Generation:0,CreationTimestamp:2020-02-07 12:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:17:35.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nlg48" for this suite.
Feb  7 12:17:42.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:17:42.219: INFO: namespace: e2e-tests-watch-nlg48, resource: bindings, ignored listing per whitelist
Feb  7 12:17:42.274: INFO: namespace e2e-tests-watch-nlg48 deletion completed in 6.275183764s

• [SLOW TEST:6.606 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:17:42.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  7 12:17:42.571: INFO: Waiting up to 5m0s for pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-ggr4m" to be "success or failure"
Feb  7 12:17:42.598: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.842115ms
Feb  7 12:17:44.613: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041720117s
Feb  7 12:17:46.637: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065268526s
Feb  7 12:17:48.670: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09881731s
Feb  7 12:17:50.707: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135954526s
Feb  7 12:17:52.803: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.231732523s
Feb  7 12:17:54.974: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.403143286s
STEP: Saw pod success
Feb  7 12:17:54.975: INFO: Pod "downward-api-d7167da3-49a3-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:17:54.996: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d7167da3-49a3-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 12:17:55.198: INFO: Waiting for pod downward-api-d7167da3-49a3-11ea-abae-0242ac110005 to disappear
Feb  7 12:17:55.213: INFO: Pod downward-api-d7167da3-49a3-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:17:55.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ggr4m" for this suite.
Feb  7 12:18:01.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:18:01.537: INFO: namespace: e2e-tests-downward-api-ggr4m, resource: bindings, ignored listing per whitelist
Feb  7 12:18:01.649: INFO: namespace e2e-tests-downward-api-ggr4m deletion completed in 6.418519601s

• [SLOW TEST:19.374 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
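The Downward API test above checks that a pod can see its own UID through an environment variable populated from `metadata.uid`. A minimal sketch, with the variable and pod names as illustrative assumptions:

```yaml
# Hedged sketch: exposing the pod's own UID via the downward API,
# approximating the "should provide pod UID as env vars" test.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # container name taken from the log above
    image: docker.io/library/busybox:1.29   # illustrative test image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid # pod UID injected by the downward API
```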
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:18:01.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:18:02.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  7 12:18:02.158: INFO: stderr: ""
Feb  7 12:18:02.158: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  7 12:18:02.167: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:18:02.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4n2bb" for this suite.
Feb  7 12:18:08.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:18:08.404: INFO: namespace: e2e-tests-kubectl-4n2bb, resource: bindings, ignored listing per whitelist
Feb  7 12:18:08.422: INFO: namespace e2e-tests-kubectl-4n2bb deletion completed in 6.207443076s

S [SKIPPING] [6.773 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  7 12:18:02.167: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:18:08.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:18:08.830: INFO: Creating deployment "nginx-deployment"
Feb  7 12:18:08.853: INFO: Waiting for observed generation 1
Feb  7 12:18:11.648: INFO: Waiting for all required pods to come up
Feb  7 12:18:12.066: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  7 12:18:49.320: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  7 12:18:49.334: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  7 12:18:49.353: INFO: Updating deployment nginx-deployment
Feb  7 12:18:49.353: INFO: Waiting for observed generation 2
Feb  7 12:18:51.376: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  7 12:18:51.380: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  7 12:18:51.384: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  7 12:18:51.399: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  7 12:18:51.399: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  7 12:18:52.935: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  7 12:18:53.020: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  7 12:18:53.020: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  7 12:18:53.569: INFO: Updating deployment nginx-deployment
Feb  7 12:18:53.569: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  7 12:18:53.946: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  7 12:18:53.983: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  7 12:18:56.846: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2n9s/deployments/nginx-deployment,UID:e6c44737-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862446,Generation:3,CreationTimestamp:2020-02-07 12:18:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-07 12:18:50 +0000 UTC 2020-02-07 12:18:08 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-07 12:18:53 +0000 UTC 2020-02-07 12:18:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  7 12:18:57.471: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2n9s/replicasets/nginx-deployment-5c98f8fb5,UID:feec559d-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862435,Generation:3,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e6c44737-49a3-11ea-a994-fa163e34d433 0xc001ba3917 0xc001ba3918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 12:18:57.471: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  7 12:18:57.471: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2n9s/replicasets/nginx-deployment-85ddf47c5d,UID:e6caeb34-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862433,Generation:3,CreationTimestamp:2020-02-07 12:18:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e6c44737-49a3-11ea-a994-fa163e34d433 0xc001ba39d7 0xc001ba39d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  7 12:18:59.637: INFO: Pod "nginx-deployment-5c98f8fb5-2gfhm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2gfhm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-2gfhm,UID:ff0443bb-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862427,Generation:0,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc001121f77 0xc001121f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c2120} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c2140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 12:18:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.637: INFO: Pod "nginx-deployment-5c98f8fb5-484lz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-484lz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-484lz,UID:026c130b-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862474,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c22a7 0xc0014c22a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c23d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c2400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.638: INFO: Pod "nginx-deployment-5c98f8fb5-4dr76" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4dr76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-4dr76,UID:031e9f4f-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862489,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c2727 0xc0014c2728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c27a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c27c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.638: INFO: Pod "nginx-deployment-5c98f8fb5-8t2nr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8t2nr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-8t2nr,UID:031efeee-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862485,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c2847 0xc0014c2848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c2940} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c2980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.638: INFO: Pod "nginx-deployment-5c98f8fb5-8vcmx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8vcmx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-8vcmx,UID:ff046c8c-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862429,Generation:0,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c29f7 0xc0014c29f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c2b00} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c2b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 12:18:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.638: INFO: Pod "nginx-deployment-5c98f8fb5-8x4n2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8x4n2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-8x4n2,UID:01b06934-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862448,Generation:0,CreationTimestamp:2020-02-07 12:18:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c2c97 0xc0014c2c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c2d50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c2e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.639: INFO: Pod "nginx-deployment-5c98f8fb5-bbb2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bbb2m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-bbb2m,UID:ff44135f-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862457,Generation:0,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c3387 0xc0014c3388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c3450} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c34a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 12:18:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.639: INFO: Pod "nginx-deployment-5c98f8fb5-g57x4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g57x4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-g57x4,UID:ff2f3b6a-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862432,Generation:0,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c35e7 0xc0014c35e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c3750} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c37b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 12:18:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.639: INFO: Pod "nginx-deployment-5c98f8fb5-qqrf8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qqrf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-qqrf8,UID:031f0e6e-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862487,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc0014c3937 0xc0014c3938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014c3a00} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0014c3a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.639: INFO: Pod "nginx-deployment-5c98f8fb5-rkl66" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rkl66,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-rkl66,UID:0385447a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862490,Generation:0,CreationTimestamp:2020-02-07 12:18:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc00140a027 0xc00140a028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140a100} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00140a120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.640: INFO: Pod "nginx-deployment-5c98f8fb5-vchfj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vchfj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-vchfj,UID:026c2b57-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862473,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc00140a217 0xc00140a218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140a280} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00140a2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.640: INFO: Pod "nginx-deployment-5c98f8fb5-vvd9m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vvd9m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-vvd9m,UID:ff029a90-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862425,Generation:0,CreationTimestamp:2020-02-07 12:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc00140a317 0xc00140a318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140a400} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00140a420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-07 12:18:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.640: INFO: Pod "nginx-deployment-5c98f8fb5-vxmdj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vxmdj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-5c98f8fb5-vxmdj,UID:031d836e-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862481,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 feec559d-49a3-11ea-a994-fa163e34d433 0xc00140a4f7 0xc00140a4f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00140a680} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00140a6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.640: INFO: Pod "nginx-deployment-85ddf47c5d-77b96" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-77b96,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-77b96,UID:e6d9fe3e-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862315,Generation:0,CreationTimestamp:2020-02-07 12:18:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140a797 0xc00140a798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140a800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140a830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-07 12:18:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1e6a4f1c9b19b0cd0e5fc23fec508fc3acac6caa28f199ddd8dad940ee32952d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
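The "is available" / "is not available" verdicts the framework prints between these dumps come down to the pod's `Ready` condition (together with the deployment's `minReadySeconds`, which defaults to 0 and appears to be 0 in this test): the Pending pods above carry only a `PodScheduled` condition, while the Running ones also have `Ready True`. A minimal sketch of that check under the `minReadySeconds == 0` assumption, using an illustrative dict shape for the condition list rather than any real client type:

```python
def is_pod_available(conditions):
    """True if the pod's Ready condition is True.

    Assumes minReadySeconds == 0; the real framework additionally requires
    the pod to have been Ready for at least minReadySeconds.
    """
    return any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions)

# Pending pods in this log carry only PodScheduled, so they are "not available";
# Running pods like nginx-deployment-85ddf47c5d-77b96 also report Ready True.
pending = [{"type": "PodScheduled", "status": "True"}]
running = [{"type": "Ready", "status": "True"},
           {"type": "ContainersReady", "status": "True"},
           {"type": "PodScheduled", "status": "True"}]
```

With these inputs, `pending` is judged not available and `running` available, matching the INFO lines surrounding each dump.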
Feb  7 12:18:59.641: INFO: Pod "nginx-deployment-85ddf47c5d-878s2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-878s2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-878s2,UID:e6e19b9a-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862344,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140ab57 0xc00140ab58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140ac30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140afd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-07 12:18:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://35666227f7f878a48fcd717b32def815c19fb4fc61a2a921f9f4a309b38d6379}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.641: INFO: Pod "nginx-deployment-85ddf47c5d-8g5z7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8g5z7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-8g5z7,UID:026c03a9-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862469,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140b257 0xc00140b258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140b2c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140b2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.641: INFO: Pod "nginx-deployment-85ddf47c5d-8rk5b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8rk5b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-8rk5b,UID:01a6fd9a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862440,Generation:0,CreationTimestamp:2020-02-07 12:18:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140b5e7 0xc00140b5e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140b6e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140b700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.641: INFO: Pod "nginx-deployment-85ddf47c5d-9x4mw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9x4mw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-9x4mw,UID:031f2dbc-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862492,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140b7f7 0xc00140b7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140ba20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140ba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.642: INFO: Pod "nginx-deployment-85ddf47c5d-bhpx8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bhpx8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-bhpx8,UID:e6fab870-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862341,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc00140bd07 0xc00140bd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00140bd90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00140bdb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-07 12:18:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3be2f2704de7662271aae5c1c4cdcf3130784a2a815d64b20697de19c7ea8ded}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.643: INFO: Pod "nginx-deployment-85ddf47c5d-bxw7k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bxw7k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-bxw7k,UID:e70b6e1b-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862358,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b2747 0xc0009b2748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b27b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b2850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-07 12:18:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d846a54cb48a7b8ffd9437c4a7c1f7581001ec1cb8a2d40136200944cb1db2b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.643: INFO: Pod "nginx-deployment-85ddf47c5d-chztm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-chztm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-chztm,UID:026c2d82-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862476,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b2957 0xc0009b2958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b29f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b2a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.643: INFO: Pod "nginx-deployment-85ddf47c5d-fz7h6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fz7h6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-fz7h6,UID:01b07d00-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862454,Generation:0,CreationTimestamp:2020-02-07 12:18:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b2b47 0xc0009b2b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b2c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b2c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.644: INFO: Pod "nginx-deployment-85ddf47c5d-gkkns" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gkkns,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-gkkns,UID:e70bc860-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862360,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b2df7 0xc0009b2df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b2e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b2ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-07 12:18:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://912955d5f700d2ab151cf27acf6343a40881e87f2cd432d73311929510948d92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.644: INFO: Pod "nginx-deployment-85ddf47c5d-np5xb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-np5xb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-np5xb,UID:e6face3c-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862367,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b3087 0xc0009b3088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b30f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b3110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-07 12:18:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8392fc2d448068c4f0f72888d174c076d7c92f70194b9f72b5429ed84f399a03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.645: INFO: Pod "nginx-deployment-85ddf47c5d-npvsb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-npvsb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-npvsb,UID:01afb51f-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862445,Generation:0,CreationTimestamp:2020-02-07 12:18:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b3ad7 0xc0009b3ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b3b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b3b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.645: INFO: Pod "nginx-deployment-85ddf47c5d-plxlk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-plxlk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-plxlk,UID:031e5845-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862488,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b3c67 0xc0009b3c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0009b3cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009b3cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.645: INFO: Pod "nginx-deployment-85ddf47c5d-q4grq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q4grq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-q4grq,UID:031ecf08-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862486,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc0009b3dd7 0xc0009b3dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4e090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4e0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.646: INFO: Pod "nginx-deployment-85ddf47c5d-sc597" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sc597,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-sc597,UID:e6e1e119-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862364,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4e417 0xc000c4e418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4e480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4e4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-07 12:18:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://00ab8f82eb6e97a23fca62bde7bc70557481c17063b4bfc6735bcc93149164e1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.646: INFO: Pod "nginx-deployment-85ddf47c5d-spl6f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-spl6f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-spl6f,UID:026a67ba-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862470,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4e9a7 0xc000c4e9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4eae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4eb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.646: INFO: Pod "nginx-deployment-85ddf47c5d-tgp58" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tgp58,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-tgp58,UID:026aef94-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862471,Generation:0,CreationTimestamp:2020-02-07 12:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4ed07 0xc000c4ed08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4ed70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4ee50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.646: INFO: Pod "nginx-deployment-85ddf47c5d-wn2w9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wn2w9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-wn2w9,UID:031e5154-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862491,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4ef37 0xc000c4ef38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4f230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4f290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.647: INFO: Pod "nginx-deployment-85ddf47c5d-zr4pj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zr4pj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-zr4pj,UID:e70c3a66-49a3-11ea-a994-fa163e34d433,ResourceVersion:20862355,Generation:0,CreationTimestamp:2020-02-07 12:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4f3c7 0xc000c4f3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4f430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4f5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-07 12:18:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-07 12:18:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://71f785fc1418174e0d6d30e163fe028f5a96cf4afc6549b5c21479c1053f0ae2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  7 12:18:59.647: INFO: Pod "nginx-deployment-85ddf47c5d-zrpx2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zrpx2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-x2n9s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2n9s/pods/nginx-deployment-85ddf47c5d-zrpx2,UID:031d1ba2-49a4-11ea-a994-fa163e34d433,ResourceVersion:20862482,Generation:0,CreationTimestamp:2020-02-07 12:18:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d e6caeb34-49a3-11ea-a994-fa163e34d433 0xc000c4f737 0xc000c4f738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vq2gh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vq2gh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vq2gh true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c4f810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c4f860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 12:18:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:18:59.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x2n9s" for this suite.
Feb  7 12:19:34.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:19:43.511: INFO: namespace: e2e-tests-deployment-x2n9s, resource: bindings, ignored listing per whitelist
Feb  7 12:19:46.045: INFO: namespace e2e-tests-deployment-x2n9s deletion completed in 45.793620494s

• [SLOW TEST:97.624 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
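The [SLOW TEST] above exercises the deployment controller's proportional scaling: when a deployment that has multiple active replicasets is scaled, the added replicas are split across replicasets in proportion to their current sizes. A minimal illustrative sketch of that allocation (this is not the actual controller code from `pkg/controller/deployment`; the leftover-to-largest rule here is a simplification of the controller's largest-remainder handling):

```python
def proportional_scale(rs_sizes, delta, total):
    """Distribute `delta` extra replicas across replicasets sized `rs_sizes`,
    proportional to each size relative to `total` (the deployment's replica
    count before scaling). Leftover replicas go to the largest replicaset.
    Illustrative sketch only, not the exact deployment-controller algorithm."""
    shares = [size * delta // total for size in rs_sizes]
    leftover = delta - sum(shares)
    if leftover:
        # hand the rounding remainder to the largest replicaset
        shares[max(range(len(rs_sizes)), key=lambda i: rs_sizes[i])] += leftover
    return [size + extra for size, extra in zip(rs_sizes, shares)]
```

For example, scaling a 10-replica deployment up by 5 while two replicasets hold 6 and 4 pods yields 9 and 6.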
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:19:46.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  7 12:19:58.401: INFO: 10 pods remaining
Feb  7 12:19:58.401: INFO: 10 pods have nil DeletionTimestamp
Feb  7 12:19:58.401: INFO: 
Feb  7 12:20:00.388: INFO: 10 pods remaining
Feb  7 12:20:00.388: INFO: 9 pods have nil DeletionTimestamp
Feb  7 12:20:00.388: INFO: 
Feb  7 12:20:03.309: INFO: 0 pods remaining
Feb  7 12:20:03.309: INFO: 0 pods have nil DeletionTimestamp
Feb  7 12:20:03.309: INFO: 
Feb  7 12:20:04.668: INFO: 0 pods remaining
Feb  7 12:20:04.668: INFO: 0 pods have nil DeletionTimestamp
Feb  7 12:20:04.668: INFO: 
STEP: Gathering metrics
W0207 12:20:06.265591       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  7 12:20:06.265: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:20:06.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gnrvn" for this suite.
Feb  7 12:20:28.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:20:28.755: INFO: namespace: e2e-tests-gc-gnrvn, resource: bindings, ignored listing per whitelist
Feb  7 12:20:30.062: INFO: namespace e2e-tests-gc-gnrvn deletion completed in 23.788346144s

• [SLOW TEST:44.016 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
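The garbage-collector test above deletes the replication controller with deleteOptions requesting foreground cascading deletion, which is what keeps the RC around (held by a `foregroundDeletion` finalizer) until all of its dependent pods are gone. The core/v1 delete request body has roughly this shape (sketch; only `propagationPolicy` is essential here):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```

With `"Background"` the RC would disappear immediately and the GC would clean up the pods afterwards; with `"Orphan"` the pods would be left behind.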
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:20:30.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb  7 12:20:32.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:36.225: INFO: stderr: ""
Feb  7 12:20:36.225: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 12:20:36.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:36.833: INFO: stderr: ""
Feb  7 12:20:36.833: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb  7 12:20:41.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:42.464: INFO: stderr: ""
Feb  7 12:20:42.464: INFO: stdout: "update-demo-nautilus-ccrpm update-demo-nautilus-sdv92 "
Feb  7 12:20:42.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:42.987: INFO: stderr: ""
Feb  7 12:20:42.987: INFO: stdout: ""
Feb  7 12:20:42.987: INFO: update-demo-nautilus-ccrpm is created but not running
Feb  7 12:20:47.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:48.318: INFO: stderr: ""
Feb  7 12:20:48.318: INFO: stdout: "update-demo-nautilus-ccrpm update-demo-nautilus-sdv92 "
Feb  7 12:20:48.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:48.393: INFO: stderr: ""
Feb  7 12:20:48.393: INFO: stdout: ""
Feb  7 12:20:48.393: INFO: update-demo-nautilus-ccrpm is created but not running
Feb  7 12:20:53.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:53.549: INFO: stderr: ""
Feb  7 12:20:53.549: INFO: stdout: "update-demo-nautilus-ccrpm update-demo-nautilus-sdv92 "
Feb  7 12:20:53.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:53.710: INFO: stderr: ""
Feb  7 12:20:53.710: INFO: stdout: ""
Feb  7 12:20:53.710: INFO: update-demo-nautilus-ccrpm is created but not running
Feb  7 12:20:58.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:58.847: INFO: stderr: ""
Feb  7 12:20:58.847: INFO: stdout: "update-demo-nautilus-ccrpm update-demo-nautilus-sdv92 "
Feb  7 12:20:58.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:20:59.096: INFO: stderr: ""
Feb  7 12:20:59.096: INFO: stdout: ""
Feb  7 12:20:59.096: INFO: update-demo-nautilus-ccrpm is created but not running
Feb  7 12:21:04.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:04.235: INFO: stderr: ""
Feb  7 12:21:04.235: INFO: stdout: "update-demo-nautilus-ccrpm update-demo-nautilus-sdv92 "
Feb  7 12:21:04.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:04.333: INFO: stderr: ""
Feb  7 12:21:04.333: INFO: stdout: "true"
Feb  7 12:21:04.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccrpm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:04.436: INFO: stderr: ""
Feb  7 12:21:04.436: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 12:21:04.436: INFO: validating pod update-demo-nautilus-ccrpm
Feb  7 12:21:04.457: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 12:21:04.457: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 12:21:04.457: INFO: update-demo-nautilus-ccrpm is verified up and running
Feb  7 12:21:04.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sdv92 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:04.587: INFO: stderr: ""
Feb  7 12:21:04.587: INFO: stdout: "true"
Feb  7 12:21:04.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sdv92 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:04.684: INFO: stderr: ""
Feb  7 12:21:04.685: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 12:21:04.685: INFO: validating pod update-demo-nautilus-sdv92
Feb  7 12:21:04.717: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 12:21:04.718: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  7 12:21:04.718: INFO: update-demo-nautilus-sdv92 is verified up and running
STEP: rolling-update to new replication controller
Feb  7 12:21:04.724: INFO: scanned /root for discovery docs: 
Feb  7 12:21:04.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:41.975: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 12:21:41.975: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 12:21:41.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:42.214: INFO: stderr: ""
Feb  7 12:21:42.214: INFO: stdout: "update-demo-kitten-kzjcj update-demo-kitten-vm99x update-demo-nautilus-ccrpm "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  7 12:21:47.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:47.398: INFO: stderr: ""
Feb  7 12:21:47.398: INFO: stdout: "update-demo-kitten-kzjcj update-demo-kitten-vm99x "
Feb  7 12:21:47.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kzjcj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:47.571: INFO: stderr: ""
Feb  7 12:21:47.571: INFO: stdout: "true"
Feb  7 12:21:47.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kzjcj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:47.702: INFO: stderr: ""
Feb  7 12:21:47.702: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 12:21:47.702: INFO: validating pod update-demo-kitten-kzjcj
Feb  7 12:21:47.727: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 12:21:47.727: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  7 12:21:47.727: INFO: update-demo-kitten-kzjcj is verified up and running
Feb  7 12:21:47.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vm99x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:47.850: INFO: stderr: ""
Feb  7 12:21:47.850: INFO: stdout: "true"
Feb  7 12:21:47.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vm99x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hpbf2'
Feb  7 12:21:47.982: INFO: stderr: ""
Feb  7 12:21:47.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  7 12:21:47.982: INFO: validating pod update-demo-kitten-vm99x
Feb  7 12:21:47.994: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  7 12:21:47.994: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  7 12:21:47.994: INFO: update-demo-kitten-vm99x is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:21:47.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hpbf2" for this suite.
Feb  7 12:22:16.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:22:16.145: INFO: namespace: e2e-tests-kubectl-hpbf2, resource: bindings, ignored listing per whitelist
Feb  7 12:22:16.287: INFO: namespace e2e-tests-kubectl-hpbf2 deletion completed in 28.288148789s

• [SLOW TEST:106.224 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
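The Update Demo test above pipes its replication controller manifests to `kubectl create -f -` and then drives the (deprecated) `kubectl rolling-update` from the nautilus RC to a kitten RC. A rough sketch of the initial manifest, reconstructed from the names, labels, and image visible in the log (field layout is illustrative, not the exact test fixture):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

`rolling-update` scales the new RC up and the old one down one pod at a time, as the stdout above shows; on current clusters the same effect comes from a Deployment with `kubectl rollout`.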
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:22:16.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  7 12:22:16.622: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863125,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 12:22:16.622: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863126,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 12:22:16.622: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863127,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  7 12:22:26.850: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863141,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 12:22:26.851: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863142,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  7 12:22:26.851: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-m2czz,SelfLink:/api/v1/namespaces/e2e-tests-watch-m2czz/configmaps/e2e-watch-test-label-changed,UID:7a6ed17a-49a4-11ea-a994-fa163e34d433,ResourceVersion:20863143,Generation:0,CreationTimestamp:2020-02-07 12:22:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:22:26.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-m2czz" for this suite.
Feb  7 12:22:32.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:22:33.070: INFO: namespace: e2e-tests-watch-m2czz, resource: bindings, ignored listing per whitelist
Feb  7 12:22:33.150: INFO: namespace e2e-tests-watch-m2czz deletion completed in 6.292672389s

• [SLOW TEST:16.863 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
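The Watchers test above relies on how the apiserver translates label changes into watch events for a label-selected watcher: an object that stops matching the selector is reported as DELETED, and one that starts matching again is reported as ADDED, even though the object itself was only updated. A small sketch of that mapping (illustrative of the semantics only, not client-go or apiserver code; function and parameter names are made up):

```python
def watch_event(selector, old_labels, new_labels):
    """Return the event a watcher with the given single-label selector sees
    when an object's labels change from old_labels to new_labels.
    Sketch of label-selected watch semantics, not real apiserver code."""
    key, value = selector
    was = old_labels.get(key) == value
    now = new_labels.get(key) == value
    if was and now:
        return "MODIFIED"
    if was and not now:
        return "DELETED"   # stopped matching: watcher observes a delete
    if now:
        return "ADDED"     # matches again: watcher observes an add
    return None            # never visible to this watcher
```

This is exactly the sequence in the log: changing the `watch-this-configmap` label away produces DELETED, and restoring it produces a fresh ADDED carrying the mutations made in between.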
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:22:33.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:22:33.361: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.46354ms)
Feb  7 12:22:33.371: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.855635ms)
Feb  7 12:22:33.436: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 64.771403ms)
Feb  7 12:22:33.452: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.144386ms)
Feb  7 12:22:33.461: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.83798ms)
Feb  7 12:22:33.473: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.657544ms)
Feb  7 12:22:33.480: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.938191ms)
Feb  7 12:22:33.488: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.854703ms)
Feb  7 12:22:33.495: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.679836ms)
Feb  7 12:22:33.502: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.140672ms)
Feb  7 12:22:33.519: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.39315ms)
Feb  7 12:22:33.530: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.016961ms)
Feb  7 12:22:33.537: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.947971ms)
Feb  7 12:22:33.544: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.687922ms)
Feb  7 12:22:33.550: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.079281ms)
Feb  7 12:22:33.557: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.132616ms)
Feb  7 12:22:33.566: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.567403ms)
Feb  7 12:22:33.575: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.497926ms)
Feb  7 12:22:33.581: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.095724ms)
Feb  7 12:22:33.594: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.854909ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:22:33.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-85b2m" for this suite.
Feb  7 12:22:39.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:22:39.826: INFO: namespace: e2e-tests-proxy-85b2m, resource: bindings, ignored listing per whitelist
Feb  7 12:22:39.916: INFO: namespace e2e-tests-proxy-85b2m deletion completed in 6.31326291s

• [SLOW TEST:6.766 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
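The proxy test above repeatedly fetches kubelet logs through the apiserver's node proxy subresource rather than contacting the kubelet directly. The URL path it builds follows the standard `/api/v1` proxy-subresource shape, which can be sketched as (helper name is made up for illustration):

```python
def node_proxy_path(node, port, subpath=""):
    """Build the apiserver path that proxies to a kubelet endpoint via the
    node's proxy subresource, e.g. /api/v1/nodes/<node>:<port>/proxy/logs/."""
    return f"/api/v1/nodes/{node}:{port}/proxy/{subpath}"
```

Requesting that path against the apiserver (with valid credentials) returns the kubelet's `/logs/` listing, which is the `alternatives.log ...` body truncated in the log lines above.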
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:22:39.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 12:22:40.263: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 12:22:40.396: INFO: Number of nodes with available pods: 0
Feb  7 12:22:40.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:42.156: INFO: Number of nodes with available pods: 0
Feb  7 12:22:42.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:42.416: INFO: Number of nodes with available pods: 0
Feb  7 12:22:42.416: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:43.408: INFO: Number of nodes with available pods: 0
Feb  7 12:22:43.408: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:44.420: INFO: Number of nodes with available pods: 0
Feb  7 12:22:44.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:45.721: INFO: Number of nodes with available pods: 0
Feb  7 12:22:45.721: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:46.487: INFO: Number of nodes with available pods: 0
Feb  7 12:22:46.487: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:47.422: INFO: Number of nodes with available pods: 0
Feb  7 12:22:47.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:48.407: INFO: Number of nodes with available pods: 0
Feb  7 12:22:48.407: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:22:49.483: INFO: Number of nodes with available pods: 1
Feb  7 12:22:49.483: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  7 12:22:49.636: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:50.715: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:51.675: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:52.666: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:53.999: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:54.671: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:55.666: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:55.666: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:22:56.684: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:56.684: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:22:57.852: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:57.852: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:22:58.665: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:58.665: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:22:59.658: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:22:59.658: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:23:00.665: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:23:00.665: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:23:01.663: INFO: Wrong image for pod: daemon-set-h79hq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  7 12:23:01.663: INFO: Pod daemon-set-h79hq is not available
Feb  7 12:23:02.759: INFO: Pod daemon-set-vrn97 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  7 12:23:02.809: INFO: Number of nodes with available pods: 0
Feb  7 12:23:02.809: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:04.103: INFO: Number of nodes with available pods: 0
Feb  7 12:23:04.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:04.839: INFO: Number of nodes with available pods: 0
Feb  7 12:23:04.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:05.837: INFO: Number of nodes with available pods: 0
Feb  7 12:23:05.837: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:06.900: INFO: Number of nodes with available pods: 0
Feb  7 12:23:06.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:07.909: INFO: Number of nodes with available pods: 0
Feb  7 12:23:07.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:08.899: INFO: Number of nodes with available pods: 0
Feb  7 12:23:08.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:09.860: INFO: Number of nodes with available pods: 0
Feb  7 12:23:09.860: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:10.834: INFO: Number of nodes with available pods: 0
Feb  7 12:23:10.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:23:12.011: INFO: Number of nodes with available pods: 1
Feb  7 12:23:12.011: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dxglz, will wait for the garbage collector to delete the pods
Feb  7 12:23:12.182: INFO: Deleting DaemonSet.extensions daemon-set took: 45.696893ms
Feb  7 12:23:12.282: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.461138ms
Feb  7 12:23:19.418: INFO: Number of nodes with available pods: 0
Feb  7 12:23:19.418: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 12:23:19.425: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dxglz/daemonsets","resourceVersion":"20863268"},"items":null}

Feb  7 12:23:19.429: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dxglz/pods","resourceVersion":"20863268"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:23:19.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dxglz" for this suite.
Feb  7 12:23:25.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:23:25.677: INFO: namespace: e2e-tests-daemonsets-dxglz, resource: bindings, ignored listing per whitelist
Feb  7 12:23:25.725: INFO: namespace e2e-tests-daemonsets-dxglz deletion completed in 6.274177489s

• [SLOW TEST:45.808 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
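The rollout traced above (docker.io/library/nginx:1.14-alpine replaced by gcr.io/kubernetes-e2e-test-images/redis:1.0) corresponds roughly to a DaemonSet of the following shape. This is a minimal sketch, not the test's actual manifest; the images come from the log, while the labels and other field values are assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                    # name from the log
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set      # label key/value are assumptions
  updateStrategy:
    type: RollingUpdate               # pods are replaced in place when the template changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                     # container name is an assumption
        image: docker.io/library/nginx:1.14-alpine
```

Changing `.spec.template.spec.containers[0].image` (for example to the redis test image) is what triggers the "Wrong image for pod" polling loop above: the controller deletes the old pod and waits for its replacement to become available on each node.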
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:23:25.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  7 12:23:46.329: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:46.345: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:48.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:48.374: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:50.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:50.367: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:52.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:52.360: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:54.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:54.368: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:56.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:56.360: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:23:58.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:23:58.372: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:00.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:00.475: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:02.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:02.392: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:04.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:04.781: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:06.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:06.537: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:08.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:08.360: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:10.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:10.379: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:12.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:12.360: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  7 12:24:14.345: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  7 12:24:14.367: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:24:14.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zlfck" for this suite.
Feb  7 12:24:38.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:24:38.541: INFO: namespace: e2e-tests-container-lifecycle-hook-zlfck, resource: bindings, ignored listing per whitelist
Feb  7 12:24:38.734: INFO: namespace e2e-tests-container-lifecycle-hook-zlfck deletion completed in 24.312424775s

• [SLOW TEST:73.009 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
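The ~28-second gap between the delete request and the pod disappearing in the log above is the preStop handler running to completion before the container is terminated. A minimal sketch of such a pod, with the pod name taken from the log and the image and command assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook    # name from the log
spec:
  containers:
  - name: main                        # container name is an assumption
    image: docker.io/library/nginx:1.14-alpine   # image is an assumption
    lifecycle:
      preStop:
        exec:
          # command is an assumption; the e2e test notifies a separate handler pod
          command: ["/bin/sh", "-c", "sleep 10"]
```

The kubelet runs the preStop exec handler before sending SIGTERM to the container, which is why deletion of a pod with such a hook is not immediate.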
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:24:38.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  7 12:24:51.664: INFO: Successfully updated pod "annotationupdatecf47b01c-49a4-11ea-abae-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:24:53.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2clrk" for this suite.
Feb  7 12:25:17.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:25:18.027: INFO: namespace: e2e-tests-projected-2clrk, resource: bindings, ignored listing per whitelist
Feb  7 12:25:18.085: INFO: namespace e2e-tests-projected-2clrk deletion completed in 24.296444695s

• [SLOW TEST:39.350 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
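The "Successfully updated pod" line above refers to patching the pod's annotations and observing the change through a projected downwardAPI volume. A minimal sketch of the shape involved, with all names and values assumed (the log's pod name carries a generated suffix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate              # hypothetical name
  annotations:
    build: one                        # value is an assumption; the test later patches it
spec:
  containers:
  - name: client                      # assumption
    image: docker.io/library/nginx:1.14-alpine   # assumption
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The kubelet refreshes downwardAPI volume contents periodically, so a change to `metadata.annotations` eventually appears in the mounted file without restarting the pod.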
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:25:18.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e6f639f3-49a4-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:25:18.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-gkwhc" to be "success or failure"
Feb  7 12:25:18.800: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.401401ms
Feb  7 12:25:21.295: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504309648s
Feb  7 12:25:23.312: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521269556s
Feb  7 12:25:25.350: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559404112s
Feb  7 12:25:27.362: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.571578278s
Feb  7 12:25:29.402: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611488805s
STEP: Saw pod success
Feb  7 12:25:29.402: INFO: Pod "pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:25:29.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 12:25:30.109: INFO: Waiting for pod pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005 to disappear
Feb  7 12:25:30.329: INFO: Pod pod-projected-secrets-e704804c-49a4-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:25:30.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gkwhc" for this suite.
Feb  7 12:25:36.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:25:36.549: INFO: namespace: e2e-tests-projected-gkwhc, resource: bindings, ignored listing per whitelist
Feb  7 12:25:36.741: INFO: namespace e2e-tests-projected-gkwhc deletion completed in 6.374087127s

• [SLOW TEST:18.656 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
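The "defaultMode set" case above maps to a projected secret volume like the following sketch; the container name matches the log, while the mode value and mount path are assumptions (the secret name in the log carries a generated suffix):

```yaml
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name from the log
    image: busybox                       # image is an assumption
    command: ["cat", "/etc/projected-secret-volume/data-1"]   # assumption
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                  # octal file mode; value is an assumption
      sources:
      - secret:
          name: projected-secret-test    # log name includes a generated suffix
```

The test then asserts that the file inside the volume was created with the requested permission bits.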
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:25:36.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb  7 12:25:37.044: INFO: Waiting up to 5m0s for pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005" in namespace "e2e-tests-var-expansion-w94mz" to be "success or failure"
Feb  7 12:25:37.059: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.715389ms
Feb  7 12:25:39.073: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02954816s
Feb  7 12:25:41.128: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083927218s
Feb  7 12:25:43.143: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099233017s
Feb  7 12:25:45.461: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417392602s
Feb  7 12:25:47.656: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.612520332s
Feb  7 12:25:49.881: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.836928156s
STEP: Saw pod success
Feb  7 12:25:49.881: INFO: Pod "var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:25:49.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 12:25:50.061: INFO: Waiting for pod var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005 to disappear
Feb  7 12:25:50.070: INFO: Pod var-expansion-f1e043ac-49a4-11ea-abae-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:25:50.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-w94mz" for this suite.
Feb  7 12:25:56.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:25:56.195: INFO: namespace: e2e-tests-var-expansion-w94mz, resource: bindings, ignored listing per whitelist
Feb  7 12:25:56.329: INFO: namespace e2e-tests-var-expansion-w94mz deletion completed in 6.250589391s

• [SLOW TEST:19.588 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
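The variable-expansion case corresponds to a pod whose `args` reference an environment variable with the `$(VAR)` syntax. A minimal sketch; the container name comes from the log, the variable name and values are assumptions:

```yaml
apiVersion: v1
kind: Pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container            # container name from the log
    image: busybox                  # assumption
    command: ["sh", "-c"]
    args: ["echo $(MY_VAR)"]        # $(MY_VAR) is expanded by the kubelet, not the shell
    env:
    - name: MY_VAR                  # variable name is an assumption
      value: "expanded-value"
```

The test checks the pod's log output to confirm the substituted value appears, which is why it waits for the `Succeeded` phase before fetching logs.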
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:25:56.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:26:56.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fvrgm" for this suite.
Feb  7 12:27:20.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:27:20.907: INFO: namespace: e2e-tests-container-probe-fvrgm, resource: bindings, ignored listing per whitelist
Feb  7 12:27:20.923: INFO: namespace e2e-tests-container-probe-fvrgm deletion completed in 24.203993322s

• [SLOW TEST:84.594 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
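The readiness-probe case above observes a pod for a full minute without any status change. A minimal sketch of a pod with a probe that always fails, with names and timings assumed:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test-webserver            # assumption
    image: busybox                  # assumption
    args: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails, so the Ready condition stays False
      initialDelaySeconds: 5
      periodSeconds: 5
```

A failing readiness probe only keeps the pod out of Service endpoints; unlike a liveness probe, it never restarts the container, which is exactly what this spec asserts.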
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:27:20.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3006ff20-49a5-11ea-abae-0242ac110005
STEP: Creating secret with name s-test-opt-upd-3006ffd0-49a5-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3006ff20-49a5-11ea-abae-0242ac110005
STEP: Updating secret s-test-opt-upd-3006ffd0-49a5-11ea-abae-0242ac110005
STEP: Creating secret with name s-test-opt-create-3006fff4-49a5-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:28:59.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wlpf6" for this suite.
Feb  7 12:29:40.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:29:40.155: INFO: namespace: e2e-tests-secrets-wlpf6, resource: bindings, ignored listing per whitelist
Feb  7 12:29:40.239: INFO: namespace e2e-tests-secrets-wlpf6 deletion completed in 40.364035013s

• [SLOW TEST:139.315 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
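The optional-secret case mounts secrets marked `optional: true`, then deletes one and creates another while the pod runs. A minimal sketch covering two of the three volumes the log names (the log's secret names include generated suffixes; container details are assumptions):

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: secret-watcher            # container name is an assumption
    image: busybox                  # assumption
    args: ["sleep", "6000"]
    volumeMounts:
    - name: del-volume
      mountPath: /etc/secret-volumes/delete
    - name: create-volume
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: del-volume
    secret:
      secretName: s-test-opt-del    # log name includes a generated suffix
      optional: true                # pod keeps running after this secret is deleted
  - name: create-volume
    secret:
      secretName: s-test-opt-create
      optional: true                # mount populates once the secret is created later
```

Because the secrets are optional, the kubelet tolerates their absence and reconciles the volume contents as the secrets are deleted, updated, and created, which is the "waiting to observe update in volume" step above.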
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:29:40.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ggcpv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 12:29:40.725: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 12:30:23.066: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-ggcpv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 12:30:23.066: INFO: >>> kubeConfig: /root/.kube/config
I0207 12:30:23.135937       9 log.go:172] (0xc001b96370) (0xc0021c83c0) Create stream
I0207 12:30:23.136039       9 log.go:172] (0xc001b96370) (0xc0021c83c0) Stream added, broadcasting: 1
I0207 12:30:23.146052       9 log.go:172] (0xc001b96370) Reply frame received for 1
I0207 12:30:23.146100       9 log.go:172] (0xc001b96370) (0xc000882460) Create stream
I0207 12:30:23.146116       9 log.go:172] (0xc001b96370) (0xc000882460) Stream added, broadcasting: 3
I0207 12:30:23.149858       9 log.go:172] (0xc001b96370) Reply frame received for 3
I0207 12:30:23.149910       9 log.go:172] (0xc001b96370) (0xc0021c8460) Create stream
I0207 12:30:23.149917       9 log.go:172] (0xc001b96370) (0xc0021c8460) Stream added, broadcasting: 5
I0207 12:30:23.150907       9 log.go:172] (0xc001b96370) Reply frame received for 5
I0207 12:30:23.290519       9 log.go:172] (0xc001b96370) Data frame received for 3
I0207 12:30:23.290568       9 log.go:172] (0xc000882460) (3) Data frame handling
I0207 12:30:23.290582       9 log.go:172] (0xc000882460) (3) Data frame sent
I0207 12:30:23.402951       9 log.go:172] (0xc001b96370) (0xc0021c8460) Stream removed, broadcasting: 5
I0207 12:30:23.403010       9 log.go:172] (0xc001b96370) Data frame received for 1
I0207 12:30:23.403020       9 log.go:172] (0xc0021c83c0) (1) Data frame handling
I0207 12:30:23.403031       9 log.go:172] (0xc0021c83c0) (1) Data frame sent
I0207 12:30:23.403062       9 log.go:172] (0xc001b96370) (0xc000882460) Stream removed, broadcasting: 3
I0207 12:30:23.403104       9 log.go:172] (0xc001b96370) (0xc0021c83c0) Stream removed, broadcasting: 1
I0207 12:30:23.403151       9 log.go:172] (0xc001b96370) Go away received
I0207 12:30:23.403281       9 log.go:172] (0xc001b96370) (0xc0021c83c0) Stream removed, broadcasting: 1
I0207 12:30:23.403298       9 log.go:172] (0xc001b96370) (0xc000882460) Stream removed, broadcasting: 3
I0207 12:30:23.403304       9 log.go:172] (0xc001b96370) (0xc0021c8460) Stream removed, broadcasting: 5
Feb  7 12:30:23.403: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:30:23.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-ggcpv" for this suite.
Feb  7 12:30:47.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:30:47.604: INFO: namespace: e2e-tests-pod-network-test-ggcpv, resource: bindings, ignored listing per whitelist
Feb  7 12:30:47.666: INFO: namespace e2e-tests-pod-network-test-ggcpv deletion completed in 24.247737664s

• [SLOW TEST:67.426 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
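The `/dial` request in the log above assumes a pair of test pods: a server pod answering on port 8080 and a client pod whose `/dial` endpoint fans requests out to it. A minimal sketch of the server side; the name, label, and image tag are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                 # name is an assumption
  labels:
    selector-key: selector-value    # assumption; matched by the selector the test creates
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.0   # tag is an assumption
    ports:
    - containerPort: 8080
```

The empty `Waiting for endpoints: map[]` line indicates every expected pod answered the dial, so the intra-pod HTTP check passed.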
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:30:47.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ab2ad825-49a5-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:30:47.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-qk86q" to be "success or failure"
Feb  7 12:30:47.937: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.214418ms
Feb  7 12:30:49.960: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033606653s
Feb  7 12:30:51.987: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059781783s
Feb  7 12:30:54.099: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172617366s
Feb  7 12:30:56.374: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447666268s
Feb  7 12:30:58.391: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463791478s
STEP: Saw pod success
Feb  7 12:30:58.391: INFO: Pod "pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:30:58.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  7 12:30:59.094: INFO: Waiting for pod pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005 to disappear
Feb  7 12:30:59.104: INFO: Pod pod-projected-configmaps-ab2b7be1-49a5-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:30:59.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qk86q" for this suite.
Feb  7 12:31:05.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:31:05.147: INFO: namespace: e2e-tests-projected-qk86q, resource: bindings, ignored listing per whitelist
Feb  7 12:31:05.268: INFO: namespace e2e-tests-projected-qk86q deletion completed in 6.158327402s

• [SLOW TEST:17.602 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:31:05.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b5ac5965-49a5-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:31:05.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-g88rg" to be "success or failure"
Feb  7 12:31:05.485: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754704ms
Feb  7 12:31:07.498: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017387673s
Feb  7 12:31:09.515: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034972306s
Feb  7 12:31:12.350: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.869196919s
Feb  7 12:31:14.362: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.882116443s
Feb  7 12:31:17.093: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.612759525s
STEP: Saw pod success
Feb  7 12:31:17.093: INFO: Pod "pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:31:17.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 12:31:18.114: INFO: Waiting for pod pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005 to disappear
Feb  7 12:31:18.348: INFO: Pod pod-configmaps-b5ad430c-49a5-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:31:18.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g88rg" for this suite.
Feb  7 12:31:24.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:31:24.533: INFO: namespace: e2e-tests-configmap-g88rg, resource: bindings, ignored listing per whitelist
Feb  7 12:31:24.702: INFO: namespace e2e-tests-configmap-g88rg deletion completed in 6.319446999s

• [SLOW TEST:19.434 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:31:24.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  7 12:31:24.864: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:31:41.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mxmvk" for this suite.
Feb  7 12:31:47.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:31:48.007: INFO: namespace: e2e-tests-init-container-mxmvk, resource: bindings, ignored listing per whitelist
Feb  7 12:31:48.041: INFO: namespace e2e-tests-init-container-mxmvk deletion completed in 6.222515478s

• [SLOW TEST:23.340 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:31:48.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:31:48.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-nbb82" to be "success or failure"
Feb  7 12:31:48.596: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 199.170875ms
Feb  7 12:31:50.614: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217188258s
Feb  7 12:31:52.626: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229706535s
Feb  7 12:31:54.687: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.290666997s
Feb  7 12:31:56.711: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314274383s
Feb  7 12:31:59.037: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639874265s
STEP: Saw pod success
Feb  7 12:31:59.037: INFO: Pod "downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:31:59.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:31:59.393: INFO: Waiting for pod downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005 to disappear
Feb  7 12:31:59.443: INFO: Pod downwardapi-volume-cf39a557-49a5-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:31:59.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nbb82" for this suite.
Feb  7 12:32:05.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:32:05.635: INFO: namespace: e2e-tests-projected-nbb82, resource: bindings, ignored listing per whitelist
Feb  7 12:32:05.857: INFO: namespace e2e-tests-projected-nbb82 deletion completed in 6.313737956s

• [SLOW TEST:17.815 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:32:05.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:32:12.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-v7zf5" for this suite.
Feb  7 12:32:18.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:32:18.876: INFO: namespace: e2e-tests-namespaces-v7zf5, resource: bindings, ignored listing per whitelist
Feb  7 12:32:18.917: INFO: namespace e2e-tests-namespaces-v7zf5 deletion completed in 6.166167139s
STEP: Destroying namespace "e2e-tests-nsdeletetest-l27xm" for this suite.
Feb  7 12:32:18.921: INFO: Namespace e2e-tests-nsdeletetest-l27xm was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-vjbvp" for this suite.
Feb  7 12:32:25.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:32:25.095: INFO: namespace: e2e-tests-nsdeletetest-vjbvp, resource: bindings, ignored listing per whitelist
Feb  7 12:32:25.169: INFO: namespace e2e-tests-nsdeletetest-vjbvp deletion completed in 6.248037582s

• [SLOW TEST:19.311 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:32:25.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  7 12:32:25.588: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  7 12:32:25.630: INFO: Waiting for terminating namespaces to be deleted...
Feb  7 12:32:25.636: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb  7 12:32:25.684: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb  7 12:32:25.685: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  7 12:32:25.685: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 12:32:25.685: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  7 12:32:25.685: INFO: 	Container weave ready: true, restart count 0
Feb  7 12:32:25.685: INFO: 	Container weave-npc ready: true, restart count 0
Feb  7 12:32:25.685: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  7 12:32:25.685: INFO: 	Container coredns ready: true, restart count 0
Feb  7 12:32:25.685: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 12:32:25.685: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 12:32:25.685: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  7 12:32:25.685: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb  7 12:32:25.685: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ebbcf4f2-49a5-11ea-abae-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ebbcf4f2-49a5-11ea-abae-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ebbcf4f2-49a5-11ea-abae-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:32:48.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-gwfbv" for this suite.
Feb  7 12:33:06.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:33:06.851: INFO: namespace: e2e-tests-sched-pred-gwfbv, resource: bindings, ignored listing per whitelist
Feb  7 12:33:06.949: INFO: namespace e2e-tests-sched-pred-gwfbv deletion completed in 18.365246635s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:41.780 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:33:06.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:34:11.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-5v8f4" for this suite.
Feb  7 12:34:17.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:34:17.629: INFO: namespace: e2e-tests-container-runtime-5v8f4, resource: bindings, ignored listing per whitelist
Feb  7 12:34:17.649: INFO: namespace e2e-tests-container-runtime-5v8f4 deletion completed in 6.541858751s

• [SLOW TEST:70.700 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:34:17.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb  7 12:34:17.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  7 12:34:19.792: INFO: stderr: ""
Feb  7 12:34:19.792: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:34:19.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wkmvh" for this suite.
Feb  7 12:34:25.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:34:26.166: INFO: namespace: e2e-tests-kubectl-wkmvh, resource: bindings, ignored listing per whitelist
Feb  7 12:34:26.234: INFO: namespace e2e-tests-kubectl-wkmvh deletion completed in 6.421973214s

• [SLOW TEST:8.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:34:26.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005
Feb  7 12:34:26.739: INFO: Pod name my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005: Found 0 pods out of 1
Feb  7 12:34:31.768: INFO: Pod name my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005: Found 1 pods out of 1
Feb  7 12:34:31.768: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005" are running
Feb  7 12:34:37.785: INFO: Pod "my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005-7ldjj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:34:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:34:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:34:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-07 12:34:26 +0000 UTC Reason: Message:}])
Feb  7 12:34:37.785: INFO: Trying to dial the pod
Feb  7 12:34:42.815: INFO: Controller my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005: Got expected result from replica 1 [my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005-7ldjj]: "my-hostname-basic-2d9f5820-49a6-11ea-abae-0242ac110005-7ldjj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:34:42.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-njrmh" for this suite.
Feb  7 12:34:48.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:34:49.089: INFO: namespace: e2e-tests-replication-controller-njrmh, resource: bindings, ignored listing per whitelist
Feb  7 12:34:49.134: INFO: namespace e2e-tests-replication-controller-njrmh deletion completed in 6.312613878s

• [SLOW TEST:22.900 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:34:49.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-c8w4v
I0207 12:34:49.359475       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-c8w4v, replica count: 1
I0207 12:34:50.410478       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:51.410891       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:52.411229       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:53.411538       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:54.411855       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:55.412332       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:56.412625       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:57.412980       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:58.413412       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:34:59.413782       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:35:00.414179       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0207 12:35:01.414692       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  7 12:35:01.631: INFO: Created: latency-svc-rgvxd
Feb  7 12:35:01.669: INFO: Got endpoints: latency-svc-rgvxd [154.054631ms]
Feb  7 12:35:01.891: INFO: Created: latency-svc-sx4d2
Feb  7 12:35:01.899: INFO: Got endpoints: latency-svc-sx4d2 [229.554992ms]
Feb  7 12:35:02.084: INFO: Created: latency-svc-w2tfd
Feb  7 12:35:02.112: INFO: Got endpoints: latency-svc-w2tfd [443.282972ms]
Feb  7 12:35:02.157: INFO: Created: latency-svc-dt9sf
Feb  7 12:35:02.430: INFO: Got endpoints: latency-svc-dt9sf [761.261242ms]
Feb  7 12:35:02.492: INFO: Created: latency-svc-qsz2v
Feb  7 12:35:02.736: INFO: Got endpoints: latency-svc-qsz2v [1.067082645s]
Feb  7 12:35:02.771: INFO: Created: latency-svc-ktvjb
Feb  7 12:35:02.779: INFO: Got endpoints: latency-svc-ktvjb [1.109445094s]
Feb  7 12:35:02.996: INFO: Created: latency-svc-x4ljh
Feb  7 12:35:03.236: INFO: Got endpoints: latency-svc-x4ljh [499.564192ms]
Feb  7 12:35:03.258: INFO: Created: latency-svc-7x2wb
Feb  7 12:35:03.289: INFO: Got endpoints: latency-svc-7x2wb [1.619315946s]
Feb  7 12:35:03.492: INFO: Created: latency-svc-rjprv
Feb  7 12:35:03.492: INFO: Got endpoints: latency-svc-rjprv [1.822865813s]
Feb  7 12:35:03.656: INFO: Created: latency-svc-ns4rn
Feb  7 12:35:03.874: INFO: Got endpoints: latency-svc-ns4rn [2.20425177s]
Feb  7 12:35:03.905: INFO: Created: latency-svc-8w7d7
Feb  7 12:35:04.090: INFO: Got endpoints: latency-svc-8w7d7 [2.420129999s]
Feb  7 12:35:04.099: INFO: Created: latency-svc-zsz79
Feb  7 12:35:04.282: INFO: Got endpoints: latency-svc-zsz79 [2.612718406s]
Feb  7 12:35:04.289: INFO: Created: latency-svc-mlhmn
Feb  7 12:35:04.355: INFO: Got endpoints: latency-svc-mlhmn [2.685300065s]
Feb  7 12:35:04.554: INFO: Created: latency-svc-gsj2k
Feb  7 12:35:04.727: INFO: Got endpoints: latency-svc-gsj2k [3.056958815s]
Feb  7 12:35:04.735: INFO: Created: latency-svc-jqsxt
Feb  7 12:35:04.752: INFO: Got endpoints: latency-svc-jqsxt [3.082447615s]
Feb  7 12:35:04.808: INFO: Created: latency-svc-5hcdv
Feb  7 12:35:05.007: INFO: Got endpoints: latency-svc-5hcdv [3.338297557s]
Feb  7 12:35:05.020: INFO: Created: latency-svc-z9bh8
Feb  7 12:35:05.034: INFO: Got endpoints: latency-svc-z9bh8 [3.364128169s]
Feb  7 12:35:05.117: INFO: Created: latency-svc-vp2rq
Feb  7 12:35:05.259: INFO: Created: latency-svc-nlxn9
Feb  7 12:35:05.272: INFO: Got endpoints: latency-svc-vp2rq [3.372846635s]
Feb  7 12:35:05.311: INFO: Got endpoints: latency-svc-nlxn9 [3.198356913s]
Feb  7 12:35:05.512: INFO: Created: latency-svc-bpjw2
Feb  7 12:35:05.542: INFO: Got endpoints: latency-svc-bpjw2 [3.111455779s]
Feb  7 12:35:05.592: INFO: Created: latency-svc-wlxt4
Feb  7 12:35:05.708: INFO: Got endpoints: latency-svc-wlxt4 [2.929414179s]
Feb  7 12:35:05.744: INFO: Created: latency-svc-rhlfx
Feb  7 12:35:06.029: INFO: Got endpoints: latency-svc-rhlfx [2.792939017s]
Feb  7 12:35:06.031: INFO: Created: latency-svc-8sczm
Feb  7 12:35:06.065: INFO: Got endpoints: latency-svc-8sczm [2.776194238s]
Feb  7 12:35:06.270: INFO: Created: latency-svc-7cjm8
Feb  7 12:35:06.270: INFO: Got endpoints: latency-svc-7cjm8 [2.777987431s]
Feb  7 12:35:06.405: INFO: Created: latency-svc-d2zgk
Feb  7 12:35:06.462: INFO: Got endpoints: latency-svc-d2zgk [2.588040517s]
Feb  7 12:35:06.640: INFO: Created: latency-svc-5w92n
Feb  7 12:35:06.676: INFO: Got endpoints: latency-svc-5w92n [2.586392006s]
Feb  7 12:35:06.817: INFO: Created: latency-svc-75pbv
Feb  7 12:35:06.836: INFO: Got endpoints: latency-svc-75pbv [2.553792858s]
Feb  7 12:35:07.073: INFO: Created: latency-svc-k9msf
Feb  7 12:35:07.095: INFO: Got endpoints: latency-svc-k9msf [2.73997662s]
Feb  7 12:35:07.258: INFO: Created: latency-svc-wj7zj
Feb  7 12:35:07.267: INFO: Got endpoints: latency-svc-wj7zj [2.539906648s]
Feb  7 12:35:07.311: INFO: Created: latency-svc-286rn
Feb  7 12:35:07.449: INFO: Got endpoints: latency-svc-286rn [2.696975381s]
Feb  7 12:35:07.486: INFO: Created: latency-svc-7f6xt
Feb  7 12:35:07.677: INFO: Created: latency-svc-2tchh
Feb  7 12:35:07.680: INFO: Got endpoints: latency-svc-7f6xt [2.672769416s]
Feb  7 12:35:07.706: INFO: Got endpoints: latency-svc-2tchh [2.671908942s]
Feb  7 12:35:07.936: INFO: Created: latency-svc-s2qcb
Feb  7 12:35:07.968: INFO: Got endpoints: latency-svc-s2qcb [2.695774972s]
Feb  7 12:35:08.106: INFO: Created: latency-svc-rkqk9
Feb  7 12:35:08.115: INFO: Got endpoints: latency-svc-rkqk9 [2.804181965s]
Feb  7 12:35:08.123: INFO: Created: latency-svc-5bch4
Feb  7 12:35:08.324: INFO: Created: latency-svc-fxftf
Feb  7 12:35:08.352: INFO: Got endpoints: latency-svc-5bch4 [2.810146795s]
Feb  7 12:35:08.357: INFO: Got endpoints: latency-svc-fxftf [2.648609529s]
Feb  7 12:35:08.407: INFO: Created: latency-svc-hdmrb
Feb  7 12:35:08.500: INFO: Got endpoints: latency-svc-hdmrb [2.471177323s]
Feb  7 12:35:08.550: INFO: Created: latency-svc-hznzx
Feb  7 12:35:08.600: INFO: Got endpoints: latency-svc-hznzx [2.534325779s]
Feb  7 12:35:08.747: INFO: Created: latency-svc-w5ltq
Feb  7 12:35:08.760: INFO: Got endpoints: latency-svc-w5ltq [2.489033098s]
Feb  7 12:35:08.809: INFO: Created: latency-svc-cx4gt
Feb  7 12:35:08.968: INFO: Got endpoints: latency-svc-cx4gt [2.505609468s]
Feb  7 12:35:09.000: INFO: Created: latency-svc-6q4hb
Feb  7 12:35:09.021: INFO: Got endpoints: latency-svc-6q4hb [2.344437455s]
Feb  7 12:35:09.259: INFO: Created: latency-svc-7cnvb
Feb  7 12:35:09.280: INFO: Got endpoints: latency-svc-7cnvb [2.443134937s]
Feb  7 12:35:09.493: INFO: Created: latency-svc-pljhs
Feb  7 12:35:09.506: INFO: Got endpoints: latency-svc-pljhs [2.41105024s]
Feb  7 12:35:09.557: INFO: Created: latency-svc-87vrr
Feb  7 12:35:09.693: INFO: Got endpoints: latency-svc-87vrr [2.425803598s]
Feb  7 12:35:09.726: INFO: Created: latency-svc-cd92m
Feb  7 12:35:09.742: INFO: Got endpoints: latency-svc-cd92m [2.293119266s]
Feb  7 12:35:09.909: INFO: Created: latency-svc-mlr5v
Feb  7 12:35:09.939: INFO: Got endpoints: latency-svc-mlr5v [2.259201612s]
Feb  7 12:35:10.113: INFO: Created: latency-svc-wnlkh
Feb  7 12:35:10.150: INFO: Got endpoints: latency-svc-wnlkh [2.443773999s]
Feb  7 12:35:10.217: INFO: Created: latency-svc-v5dtm
Feb  7 12:35:10.322: INFO: Got endpoints: latency-svc-v5dtm [2.35419243s]
Feb  7 12:35:10.367: INFO: Created: latency-svc-qsq5c
Feb  7 12:35:10.561: INFO: Got endpoints: latency-svc-qsq5c [2.446077099s]
Feb  7 12:35:10.619: INFO: Created: latency-svc-57htp
Feb  7 12:35:10.629: INFO: Got endpoints: latency-svc-57htp [2.276827108s]
Feb  7 12:35:10.869: INFO: Created: latency-svc-4c99f
Feb  7 12:35:10.888: INFO: Got endpoints: latency-svc-4c99f [2.530741448s]
Feb  7 12:35:11.089: INFO: Created: latency-svc-5b5b7
Feb  7 12:35:11.105: INFO: Got endpoints: latency-svc-5b5b7 [2.604706909s]
Feb  7 12:35:11.267: INFO: Created: latency-svc-b28jb
Feb  7 12:35:11.299: INFO: Got endpoints: latency-svc-b28jb [2.698697307s]
Feb  7 12:35:11.304: INFO: Created: latency-svc-zpzgk
Feb  7 12:35:11.328: INFO: Got endpoints: latency-svc-zpzgk [2.568376088s]
Feb  7 12:35:11.482: INFO: Created: latency-svc-lgw9t
Feb  7 12:35:11.516: INFO: Got endpoints: latency-svc-lgw9t [2.547823254s]
Feb  7 12:35:11.692: INFO: Created: latency-svc-xqr52
Feb  7 12:35:11.728: INFO: Got endpoints: latency-svc-xqr52 [2.707121375s]
Feb  7 12:35:11.885: INFO: Created: latency-svc-45mtj
Feb  7 12:35:11.935: INFO: Got endpoints: latency-svc-45mtj [2.655629549s]
Feb  7 12:35:12.093: INFO: Created: latency-svc-48jlg
Feb  7 12:35:12.107: INFO: Got endpoints: latency-svc-48jlg [2.601023473s]
Feb  7 12:35:12.171: INFO: Created: latency-svc-x7mq4
Feb  7 12:35:12.291: INFO: Got endpoints: latency-svc-x7mq4 [2.598674941s]
Feb  7 12:35:12.333: INFO: Created: latency-svc-hbxll
Feb  7 12:35:12.375: INFO: Got endpoints: latency-svc-hbxll [2.632921706s]
Feb  7 12:35:12.550: INFO: Created: latency-svc-tmfk6
Feb  7 12:35:12.550: INFO: Got endpoints: latency-svc-tmfk6 [2.610509815s]
Feb  7 12:35:12.713: INFO: Created: latency-svc-jq9lg
Feb  7 12:35:12.728: INFO: Got endpoints: latency-svc-jq9lg [2.578123429s]
Feb  7 12:35:12.969: INFO: Created: latency-svc-cqfqh
Feb  7 12:35:12.975: INFO: Got endpoints: latency-svc-cqfqh [2.65226854s]
Feb  7 12:35:13.169: INFO: Created: latency-svc-8zkzs
Feb  7 12:35:13.185: INFO: Got endpoints: latency-svc-8zkzs [2.623486719s]
Feb  7 12:35:13.291: INFO: Created: latency-svc-j4nqc
Feb  7 12:35:13.390: INFO: Got endpoints: latency-svc-j4nqc [2.760653363s]
Feb  7 12:35:13.444: INFO: Created: latency-svc-v8gpt
Feb  7 12:35:13.460: INFO: Got endpoints: latency-svc-v8gpt [2.571279164s]
Feb  7 12:35:13.678: INFO: Created: latency-svc-d5ksn
Feb  7 12:35:13.699: INFO: Got endpoints: latency-svc-d5ksn [2.593359058s]
Feb  7 12:35:13.937: INFO: Created: latency-svc-k6k7k
Feb  7 12:35:13.963: INFO: Got endpoints: latency-svc-k6k7k [2.663982368s]
Feb  7 12:35:14.193: INFO: Created: latency-svc-m7pwf
Feb  7 12:35:14.223: INFO: Got endpoints: latency-svc-m7pwf [2.895355086s]
Feb  7 12:35:14.375: INFO: Created: latency-svc-fgw4k
Feb  7 12:35:14.415: INFO: Got endpoints: latency-svc-fgw4k [2.899476315s]
Feb  7 12:35:14.568: INFO: Created: latency-svc-pmtd5
Feb  7 12:35:14.633: INFO: Got endpoints: latency-svc-pmtd5 [2.90420376s]
Feb  7 12:35:14.753: INFO: Created: latency-svc-w9n2w
Feb  7 12:35:14.778: INFO: Got endpoints: latency-svc-w9n2w [2.842180063s]
Feb  7 12:35:14.829: INFO: Created: latency-svc-wt54z
Feb  7 12:35:14.930: INFO: Got endpoints: latency-svc-wt54z [2.822383869s]
Feb  7 12:35:14.979: INFO: Created: latency-svc-jd5wb
Feb  7 12:35:14.981: INFO: Got endpoints: latency-svc-jd5wb [2.688861599s]
Feb  7 12:35:15.125: INFO: Created: latency-svc-rrszg
Feb  7 12:35:15.134: INFO: Got endpoints: latency-svc-rrszg [2.758952745s]
Feb  7 12:35:15.235: INFO: Created: latency-svc-rrt42
Feb  7 12:35:15.246: INFO: Got endpoints: latency-svc-rrt42 [2.695341595s]
Feb  7 12:35:15.482: INFO: Created: latency-svc-27rx8
Feb  7 12:35:15.609: INFO: Got endpoints: latency-svc-27rx8 [2.880921327s]
Feb  7 12:35:15.643: INFO: Created: latency-svc-jk7g4
Feb  7 12:35:15.660: INFO: Got endpoints: latency-svc-jk7g4 [2.684948727s]
Feb  7 12:35:15.837: INFO: Created: latency-svc-sgch7
Feb  7 12:35:15.907: INFO: Got endpoints: latency-svc-sgch7 [2.721774724s]
Feb  7 12:35:16.100: INFO: Created: latency-svc-n7gvk
Feb  7 12:35:16.121: INFO: Got endpoints: latency-svc-n7gvk [2.730789735s]
Feb  7 12:35:16.675: INFO: Created: latency-svc-6zpt8
Feb  7 12:35:16.953: INFO: Got endpoints: latency-svc-6zpt8 [3.493302708s]
Feb  7 12:35:17.034: INFO: Created: latency-svc-kgpkx
Feb  7 12:35:17.262: INFO: Got endpoints: latency-svc-kgpkx [3.562917752s]
Feb  7 12:35:17.268: INFO: Created: latency-svc-bgnjv
Feb  7 12:35:17.291: INFO: Got endpoints: latency-svc-bgnjv [3.328299156s]
Feb  7 12:35:17.468: INFO: Created: latency-svc-dfs2p
Feb  7 12:35:17.485: INFO: Got endpoints: latency-svc-dfs2p [3.261153992s]
Feb  7 12:35:17.654: INFO: Created: latency-svc-jskfg
Feb  7 12:35:17.687: INFO: Got endpoints: latency-svc-jskfg [3.270994317s]
Feb  7 12:35:17.733: INFO: Created: latency-svc-wlzqd
Feb  7 12:35:17.843: INFO: Got endpoints: latency-svc-wlzqd [3.209641963s]
Feb  7 12:35:17.979: INFO: Created: latency-svc-zs4hn
Feb  7 12:35:18.117: INFO: Got endpoints: latency-svc-zs4hn [3.339493416s]
Feb  7 12:35:18.202: INFO: Created: latency-svc-8rzpl
Feb  7 12:35:18.383: INFO: Got endpoints: latency-svc-8rzpl [3.452605354s]
Feb  7 12:35:18.406: INFO: Created: latency-svc-b6tdz
Feb  7 12:35:18.435: INFO: Got endpoints: latency-svc-b6tdz [3.453943881s]
Feb  7 12:35:18.637: INFO: Created: latency-svc-pjxnn
Feb  7 12:35:18.642: INFO: Got endpoints: latency-svc-pjxnn [3.507671781s]
Feb  7 12:35:18.897: INFO: Created: latency-svc-vpkhh
Feb  7 12:35:18.924: INFO: Got endpoints: latency-svc-vpkhh [3.678486207s]
Feb  7 12:35:19.145: INFO: Created: latency-svc-c68w2
Feb  7 12:35:19.170: INFO: Got endpoints: latency-svc-c68w2 [3.560636095s]
Feb  7 12:35:19.241: INFO: Created: latency-svc-sx8km
Feb  7 12:35:19.482: INFO: Got endpoints: latency-svc-sx8km [3.821789544s]
Feb  7 12:35:19.513: INFO: Created: latency-svc-tfgt4
Feb  7 12:35:19.700: INFO: Got endpoints: latency-svc-tfgt4 [3.793226528s]
Feb  7 12:35:19.737: INFO: Created: latency-svc-q5hls
Feb  7 12:35:19.739: INFO: Got endpoints: latency-svc-q5hls [3.617986885s]
Feb  7 12:35:19.894: INFO: Created: latency-svc-s72jj
Feb  7 12:35:19.934: INFO: Got endpoints: latency-svc-s72jj [2.980812234s]
Feb  7 12:35:20.075: INFO: Created: latency-svc-l5xvf
Feb  7 12:35:20.095: INFO: Got endpoints: latency-svc-l5xvf [2.833683753s]
Feb  7 12:35:20.156: INFO: Created: latency-svc-rjbvr
Feb  7 12:35:20.289: INFO: Got endpoints: latency-svc-rjbvr [2.997289227s]
Feb  7 12:35:20.313: INFO: Created: latency-svc-bj5sn
Feb  7 12:35:20.355: INFO: Got endpoints: latency-svc-bj5sn [2.870458548s]
Feb  7 12:35:20.528: INFO: Created: latency-svc-s9qgr
Feb  7 12:35:20.563: INFO: Got endpoints: latency-svc-s9qgr [2.876568985s]
Feb  7 12:35:20.743: INFO: Created: latency-svc-qhgfr
Feb  7 12:35:20.752: INFO: Got endpoints: latency-svc-qhgfr [2.909175335s]
Feb  7 12:35:20.915: INFO: Created: latency-svc-g4phq
Feb  7 12:35:20.931: INFO: Created: latency-svc-h5jwg
Feb  7 12:35:20.947: INFO: Got endpoints: latency-svc-g4phq [2.829830068s]
Feb  7 12:35:20.971: INFO: Got endpoints: latency-svc-h5jwg [2.588556749s]
Feb  7 12:35:21.027: INFO: Created: latency-svc-qjllx
Feb  7 12:35:21.127: INFO: Got endpoints: latency-svc-qjllx [2.692477235s]
Feb  7 12:35:21.175: INFO: Created: latency-svc-28f9l
Feb  7 12:35:21.180: INFO: Got endpoints: latency-svc-28f9l [2.537353973s]
Feb  7 12:35:21.331: INFO: Created: latency-svc-hlrsk
Feb  7 12:35:21.358: INFO: Got endpoints: latency-svc-hlrsk [2.433398465s]
Feb  7 12:35:21.522: INFO: Created: latency-svc-jxbcm
Feb  7 12:35:21.544: INFO: Got endpoints: latency-svc-jxbcm [2.374315823s]
Feb  7 12:35:21.628: INFO: Created: latency-svc-74wrv
Feb  7 12:35:21.748: INFO: Got endpoints: latency-svc-74wrv [2.265411803s]
Feb  7 12:35:21.991: INFO: Created: latency-svc-mbmlz
Feb  7 12:35:22.199: INFO: Created: latency-svc-x6mvv
Feb  7 12:35:22.213: INFO: Got endpoints: latency-svc-mbmlz [2.511615678s]
Feb  7 12:35:22.356: INFO: Got endpoints: latency-svc-x6mvv [2.616826925s]
Feb  7 12:35:22.383: INFO: Created: latency-svc-xvlv9
Feb  7 12:35:22.423: INFO: Got endpoints: latency-svc-xvlv9 [2.48860113s]
Feb  7 12:35:22.623: INFO: Created: latency-svc-p6jtm
Feb  7 12:35:22.644: INFO: Got endpoints: latency-svc-p6jtm [2.548827647s]
Feb  7 12:35:22.814: INFO: Created: latency-svc-2chrq
Feb  7 12:35:22.834: INFO: Got endpoints: latency-svc-2chrq [2.545456875s]
Feb  7 12:35:22.884: INFO: Created: latency-svc-zw247
Feb  7 12:35:22.901: INFO: Got endpoints: latency-svc-zw247 [2.545177298s]
Feb  7 12:35:23.136: INFO: Created: latency-svc-xlqvd
Feb  7 12:35:23.167: INFO: Got endpoints: latency-svc-xlqvd [2.603321524s]
Feb  7 12:35:23.379: INFO: Created: latency-svc-mpbn9
Feb  7 12:35:23.391: INFO: Got endpoints: latency-svc-mpbn9 [2.639114919s]
Feb  7 12:35:23.611: INFO: Created: latency-svc-rrnqg
Feb  7 12:35:23.647: INFO: Got endpoints: latency-svc-rrnqg [2.699060898s]
Feb  7 12:35:23.938: INFO: Created: latency-svc-52gzb
Feb  7 12:35:23.938: INFO: Got endpoints: latency-svc-52gzb [2.966177082s]
Feb  7 12:35:24.109: INFO: Created: latency-svc-v8hqt
Feb  7 12:35:24.129: INFO: Got endpoints: latency-svc-v8hqt [3.000976553s]
Feb  7 12:35:24.166: INFO: Created: latency-svc-rjskc
Feb  7 12:35:24.183: INFO: Got endpoints: latency-svc-rjskc [3.00345407s]
Feb  7 12:35:24.372: INFO: Created: latency-svc-pxw5k
Feb  7 12:35:24.394: INFO: Got endpoints: latency-svc-pxw5k [3.036503681s]
Feb  7 12:35:24.611: INFO: Created: latency-svc-fjgsg
Feb  7 12:35:24.660: INFO: Got endpoints: latency-svc-fjgsg [3.116219602s]
Feb  7 12:35:24.836: INFO: Created: latency-svc-794lg
Feb  7 12:35:24.836: INFO: Got endpoints: latency-svc-794lg [3.087960601s]
Feb  7 12:35:25.034: INFO: Created: latency-svc-mf486
Feb  7 12:35:25.049: INFO: Got endpoints: latency-svc-mf486 [2.836263547s]
Feb  7 12:35:25.242: INFO: Created: latency-svc-6hpg4
Feb  7 12:35:25.281: INFO: Got endpoints: latency-svc-6hpg4 [2.923807554s]
Feb  7 12:35:25.432: INFO: Created: latency-svc-528lm
Feb  7 12:35:25.441: INFO: Got endpoints: latency-svc-528lm [3.017375774s]
Feb  7 12:35:25.636: INFO: Created: latency-svc-7lpjl
Feb  7 12:35:25.649: INFO: Got endpoints: latency-svc-7lpjl [3.004950648s]
Feb  7 12:35:25.829: INFO: Created: latency-svc-gqp5t
Feb  7 12:35:25.864: INFO: Got endpoints: latency-svc-gqp5t [3.029919041s]
Feb  7 12:35:26.102: INFO: Created: latency-svc-6vvll
Feb  7 12:35:26.114: INFO: Got endpoints: latency-svc-6vvll [3.212952758s]
Feb  7 12:35:26.165: INFO: Created: latency-svc-hpc7j
Feb  7 12:35:26.276: INFO: Got endpoints: latency-svc-hpc7j [3.108987594s]
Feb  7 12:35:26.325: INFO: Created: latency-svc-5hslt
Feb  7 12:35:26.355: INFO: Got endpoints: latency-svc-5hslt [2.963737929s]
Feb  7 12:35:26.546: INFO: Created: latency-svc-fn7m8
Feb  7 12:35:26.611: INFO: Got endpoints: latency-svc-fn7m8 [2.964430856s]
Feb  7 12:35:26.722: INFO: Created: latency-svc-rmgz8
Feb  7 12:35:26.757: INFO: Got endpoints: latency-svc-rmgz8 [2.819055721s]
Feb  7 12:35:26.908: INFO: Created: latency-svc-5rz7m
Feb  7 12:35:26.992: INFO: Got endpoints: latency-svc-5rz7m [2.863584604s]
Feb  7 12:35:27.015: INFO: Created: latency-svc-v7hk5
Feb  7 12:35:27.159: INFO: Got endpoints: latency-svc-v7hk5 [2.976350939s]
Feb  7 12:35:27.201: INFO: Created: latency-svc-d4r9b
Feb  7 12:35:27.207: INFO: Got endpoints: latency-svc-d4r9b [2.812954481s]
Feb  7 12:35:27.377: INFO: Created: latency-svc-t7st2
Feb  7 12:35:27.386: INFO: Got endpoints: latency-svc-t7st2 [2.725464452s]
Feb  7 12:35:27.630: INFO: Created: latency-svc-kmr9f
Feb  7 12:35:27.632: INFO: Got endpoints: latency-svc-kmr9f [2.795833388s]
Feb  7 12:35:27.895: INFO: Created: latency-svc-h6b64
Feb  7 12:35:27.928: INFO: Got endpoints: latency-svc-h6b64 [2.879433644s]
Feb  7 12:35:28.173: INFO: Created: latency-svc-kn24n
Feb  7 12:35:28.195: INFO: Got endpoints: latency-svc-kn24n [2.913746745s]
Feb  7 12:35:28.374: INFO: Created: latency-svc-49mwk
Feb  7 12:35:28.392: INFO: Got endpoints: latency-svc-49mwk [2.950964729s]
Feb  7 12:35:28.552: INFO: Created: latency-svc-zxdcn
Feb  7 12:35:28.639: INFO: Created: latency-svc-bfm2s
Feb  7 12:35:28.639: INFO: Got endpoints: latency-svc-zxdcn [2.989265422s]
Feb  7 12:35:28.792: INFO: Got endpoints: latency-svc-bfm2s [2.92728162s]
Feb  7 12:35:28.837: INFO: Created: latency-svc-4xmdh
Feb  7 12:35:28.951: INFO: Got endpoints: latency-svc-4xmdh [2.836381697s]
Feb  7 12:35:28.986: INFO: Created: latency-svc-5qmq9
Feb  7 12:35:29.015: INFO: Got endpoints: latency-svc-5qmq9 [2.738137899s]
Feb  7 12:35:29.254: INFO: Created: latency-svc-tv4s9
Feb  7 12:35:29.272: INFO: Got endpoints: latency-svc-tv4s9 [2.916678486s]
Feb  7 12:35:29.788: INFO: Created: latency-svc-mrpr9
Feb  7 12:35:29.816: INFO: Got endpoints: latency-svc-mrpr9 [3.204370797s]
Feb  7 12:35:30.926: INFO: Created: latency-svc-v246t
Feb  7 12:35:30.971: INFO: Got endpoints: latency-svc-v246t [4.214366644s]
Feb  7 12:35:31.146: INFO: Created: latency-svc-zrtp8
Feb  7 12:35:31.204: INFO: Got endpoints: latency-svc-zrtp8 [4.211179712s]
Feb  7 12:35:31.392: INFO: Created: latency-svc-m7szg
Feb  7 12:35:31.455: INFO: Got endpoints: latency-svc-m7szg [4.294964153s]
Feb  7 12:35:31.614: INFO: Created: latency-svc-zrqk2
Feb  7 12:35:31.815: INFO: Got endpoints: latency-svc-zrqk2 [4.607304364s]
Feb  7 12:35:31.820: INFO: Created: latency-svc-lzqb7
Feb  7 12:35:31.859: INFO: Got endpoints: latency-svc-lzqb7 [4.472504002s]
Feb  7 12:35:32.146: INFO: Created: latency-svc-smnwz
Feb  7 12:35:32.195: INFO: Created: latency-svc-kprfg
Feb  7 12:35:32.335: INFO: Got endpoints: latency-svc-smnwz [4.702671333s]
Feb  7 12:35:32.346: INFO: Created: latency-svc-gqrpk
Feb  7 12:35:32.368: INFO: Got endpoints: latency-svc-kprfg [4.439085185s]
Feb  7 12:35:32.518: INFO: Got endpoints: latency-svc-gqrpk [4.322658469s]
Feb  7 12:35:32.549: INFO: Created: latency-svc-57v6r
Feb  7 12:35:32.595: INFO: Got endpoints: latency-svc-57v6r [4.203247643s]
Feb  7 12:35:32.780: INFO: Created: latency-svc-frgd6
Feb  7 12:35:32.807: INFO: Got endpoints: latency-svc-frgd6 [4.167438529s]
Feb  7 12:35:32.953: INFO: Created: latency-svc-zbcv7
Feb  7 12:35:32.953: INFO: Got endpoints: latency-svc-zbcv7 [4.16070918s]
Feb  7 12:35:33.188: INFO: Created: latency-svc-kxlqr
Feb  7 12:35:33.199: INFO: Got endpoints: latency-svc-kxlqr [4.248701057s]
Feb  7 12:35:33.267: INFO: Created: latency-svc-9xz6q
Feb  7 12:35:33.364: INFO: Got endpoints: latency-svc-9xz6q [4.349220014s]
Feb  7 12:35:33.396: INFO: Created: latency-svc-cqdkk
Feb  7 12:35:33.415: INFO: Got endpoints: latency-svc-cqdkk [4.143203926s]
Feb  7 12:35:33.472: INFO: Created: latency-svc-qdft8
Feb  7 12:35:33.636: INFO: Got endpoints: latency-svc-qdft8 [3.820254496s]
Feb  7 12:35:33.668: INFO: Created: latency-svc-p27pp
Feb  7 12:35:33.689: INFO: Got endpoints: latency-svc-p27pp [2.717202075s]
Feb  7 12:35:33.871: INFO: Created: latency-svc-9mtb4
Feb  7 12:35:33.886: INFO: Got endpoints: latency-svc-9mtb4 [2.681618057s]
Feb  7 12:35:34.206: INFO: Created: latency-svc-xpj8r
Feb  7 12:35:34.232: INFO: Got endpoints: latency-svc-xpj8r [2.777071564s]
Feb  7 12:35:34.400: INFO: Created: latency-svc-5mkj7
Feb  7 12:35:34.409: INFO: Got endpoints: latency-svc-5mkj7 [2.5940668s]
Feb  7 12:35:34.572: INFO: Created: latency-svc-5glrg
Feb  7 12:35:34.598: INFO: Got endpoints: latency-svc-5glrg [2.738943422s]
Feb  7 12:35:34.741: INFO: Created: latency-svc-kqm67
Feb  7 12:35:34.779: INFO: Created: latency-svc-bjqd8
Feb  7 12:35:34.780: INFO: Got endpoints: latency-svc-kqm67 [2.44533683s]
Feb  7 12:35:34.795: INFO: Got endpoints: latency-svc-bjqd8 [2.426725964s]
Feb  7 12:35:34.893: INFO: Created: latency-svc-bc55x
Feb  7 12:35:34.907: INFO: Got endpoints: latency-svc-bc55x [2.389079353s]
Feb  7 12:35:34.936: INFO: Created: latency-svc-slxv4
Feb  7 12:35:34.954: INFO: Got endpoints: latency-svc-slxv4 [2.358480362s]
Feb  7 12:35:35.076: INFO: Created: latency-svc-277ct
Feb  7 12:35:35.105: INFO: Got endpoints: latency-svc-277ct [2.298319445s]
Feb  7 12:35:35.155: INFO: Created: latency-svc-wvfq6
Feb  7 12:35:35.304: INFO: Got endpoints: latency-svc-wvfq6 [2.351008826s]
Feb  7 12:35:35.330: INFO: Created: latency-svc-cz5sd
Feb  7 12:35:35.362: INFO: Got endpoints: latency-svc-cz5sd [2.162353459s]
Feb  7 12:35:35.525: INFO: Created: latency-svc-2fr98
Feb  7 12:35:35.536: INFO: Got endpoints: latency-svc-2fr98 [2.171398047s]
Feb  7 12:35:35.582: INFO: Created: latency-svc-cbgvx
Feb  7 12:35:35.660: INFO: Got endpoints: latency-svc-cbgvx [2.245168627s]
Feb  7 12:35:35.686: INFO: Created: latency-svc-zq6sd
Feb  7 12:35:35.706: INFO: Got endpoints: latency-svc-zq6sd [2.068965371s]
Feb  7 12:35:35.879: INFO: Created: latency-svc-l6m9d
Feb  7 12:35:35.919: INFO: Got endpoints: latency-svc-l6m9d [2.230061427s]
Feb  7 12:35:35.975: INFO: Created: latency-svc-bvdbs
Feb  7 12:35:36.240: INFO: Got endpoints: latency-svc-bvdbs [2.354651s]
Feb  7 12:35:36.273: INFO: Created: latency-svc-jfw5w
Feb  7 12:35:36.295: INFO: Got endpoints: latency-svc-jfw5w [2.063217715s]
Feb  7 12:35:36.515: INFO: Created: latency-svc-mcwp9
Feb  7 12:35:36.541: INFO: Got endpoints: latency-svc-mcwp9 [2.131444482s]
Feb  7 12:35:36.669: INFO: Created: latency-svc-7dhjr
Feb  7 12:35:36.693: INFO: Got endpoints: latency-svc-7dhjr [2.094823012s]
Feb  7 12:35:36.726: INFO: Created: latency-svc-df75q
Feb  7 12:35:36.746: INFO: Got endpoints: latency-svc-df75q [1.965809115s]
Feb  7 12:35:36.893: INFO: Created: latency-svc-xzcjz
Feb  7 12:35:36.913: INFO: Got endpoints: latency-svc-xzcjz [2.117817604s]
Feb  7 12:35:37.046: INFO: Created: latency-svc-n6x8d
Feb  7 12:35:37.076: INFO: Got endpoints: latency-svc-n6x8d [2.168850297s]
Feb  7 12:35:37.123: INFO: Created: latency-svc-rnqvd
Feb  7 12:35:37.251: INFO: Got endpoints: latency-svc-rnqvd [2.296767578s]
Feb  7 12:35:37.294: INFO: Created: latency-svc-srtbf
Feb  7 12:35:37.306: INFO: Got endpoints: latency-svc-srtbf [2.201065238s]
Feb  7 12:35:37.357: INFO: Created: latency-svc-zmjmf
Feb  7 12:35:37.434: INFO: Got endpoints: latency-svc-zmjmf [2.129287216s]
Feb  7 12:35:37.464: INFO: Created: latency-svc-gmtcx
Feb  7 12:35:37.489: INFO: Got endpoints: latency-svc-gmtcx [2.126816149s]
Feb  7 12:35:37.548: INFO: Created: latency-svc-glpqn
Feb  7 12:35:37.642: INFO: Got endpoints: latency-svc-glpqn [2.106323897s]
Feb  7 12:35:37.669: INFO: Created: latency-svc-4w8kf
Feb  7 12:35:37.680: INFO: Got endpoints: latency-svc-4w8kf [2.019597187s]
Feb  7 12:35:37.746: INFO: Created: latency-svc-9w25p
Feb  7 12:35:37.990: INFO: Got endpoints: latency-svc-9w25p [2.284163727s]
Feb  7 12:35:38.013: INFO: Created: latency-svc-t4hnq
Feb  7 12:35:38.192: INFO: Got endpoints: latency-svc-t4hnq [2.272296735s]
Feb  7 12:35:38.378: INFO: Created: latency-svc-d95sq
Feb  7 12:35:38.378: INFO: Got endpoints: latency-svc-d95sq [2.137965798s]
Feb  7 12:35:38.716: INFO: Created: latency-svc-7pmlr
Feb  7 12:35:38.757: INFO: Got endpoints: latency-svc-7pmlr [2.460976722s]
Feb  7 12:35:38.970: INFO: Created: latency-svc-nqhwv
Feb  7 12:35:38.982: INFO: Got endpoints: latency-svc-nqhwv [2.441311528s]
Feb  7 12:35:39.328: INFO: Created: latency-svc-cw9cb
Feb  7 12:35:39.358: INFO: Got endpoints: latency-svc-cw9cb [2.665236434s]
Feb  7 12:35:39.442: INFO: Created: latency-svc-5k2t5
Feb  7 12:35:39.535: INFO: Got endpoints: latency-svc-5k2t5 [2.788786921s]
Feb  7 12:35:39.585: INFO: Created: latency-svc-vzk2d
Feb  7 12:35:39.598: INFO: Got endpoints: latency-svc-vzk2d [2.685634043s]
Feb  7 12:35:39.627: INFO: Created: latency-svc-5c5ts
Feb  7 12:35:39.774: INFO: Got endpoints: latency-svc-5c5ts [2.697830436s]
Feb  7 12:35:39.864: INFO: Created: latency-svc-jkf99
Feb  7 12:35:39.988: INFO: Got endpoints: latency-svc-jkf99 [2.736533396s]
Feb  7 12:35:39.988: INFO: Latencies: [229.554992ms 443.282972ms 499.564192ms 761.261242ms 1.067082645s 1.109445094s 1.619315946s 1.822865813s 1.965809115s 2.019597187s 2.063217715s 2.068965371s 2.094823012s 2.106323897s 2.117817604s 2.126816149s 2.129287216s 2.131444482s 2.137965798s 2.162353459s 2.168850297s 2.171398047s 2.201065238s 2.20425177s 2.230061427s 2.245168627s 2.259201612s 2.265411803s 2.272296735s 2.276827108s 2.284163727s 2.293119266s 2.296767578s 2.298319445s 2.344437455s 2.351008826s 2.35419243s 2.354651s 2.358480362s 2.374315823s 2.389079353s 2.41105024s 2.420129999s 2.425803598s 2.426725964s 2.433398465s 2.441311528s 2.443134937s 2.443773999s 2.44533683s 2.446077099s 2.460976722s 2.471177323s 2.48860113s 2.489033098s 2.505609468s 2.511615678s 2.530741448s 2.534325779s 2.537353973s 2.539906648s 2.545177298s 2.545456875s 2.547823254s 2.548827647s 2.553792858s 2.568376088s 2.571279164s 2.578123429s 2.586392006s 2.588040517s 2.588556749s 2.593359058s 2.5940668s 2.598674941s 2.601023473s 2.603321524s 2.604706909s 2.610509815s 2.612718406s 2.616826925s 2.623486719s 2.632921706s 2.639114919s 2.648609529s 2.65226854s 2.655629549s 2.663982368s 2.665236434s 2.671908942s 2.672769416s 2.681618057s 2.684948727s 2.685300065s 2.685634043s 2.688861599s 2.692477235s 2.695341595s 2.695774972s 2.696975381s 2.697830436s 2.698697307s 2.699060898s 2.707121375s 2.717202075s 2.721774724s 2.725464452s 2.730789735s 2.736533396s 2.738137899s 2.738943422s 2.73997662s 2.758952745s 2.760653363s 2.776194238s 2.777071564s 2.777987431s 2.788786921s 2.792939017s 2.795833388s 2.804181965s 2.810146795s 2.812954481s 2.819055721s 2.822383869s 2.829830068s 2.833683753s 2.836263547s 2.836381697s 2.842180063s 2.863584604s 2.870458548s 2.876568985s 2.879433644s 2.880921327s 2.895355086s 2.899476315s 2.90420376s 2.909175335s 2.913746745s 2.916678486s 2.923807554s 2.92728162s 2.929414179s 2.950964729s 2.963737929s 2.964430856s 2.966177082s 2.976350939s 2.980812234s 2.989265422s 2.997289227s 3.000976553s 3.00345407s 3.004950648s 3.017375774s 3.029919041s 3.036503681s 3.056958815s 3.082447615s 3.087960601s 3.108987594s 3.111455779s 3.116219602s 3.198356913s 3.204370797s 3.209641963s 3.212952758s 3.261153992s 3.270994317s 3.328299156s 3.338297557s 3.339493416s 3.364128169s 3.372846635s 3.452605354s 3.453943881s 3.493302708s 3.507671781s 3.560636095s 3.562917752s 3.617986885s 3.678486207s 3.793226528s 3.820254496s 3.821789544s 4.143203926s 4.16070918s 4.167438529s 4.203247643s 4.211179712s 4.214366644s 4.248701057s 4.294964153s 4.322658469s 4.349220014s 4.439085185s 4.472504002s 4.607304364s 4.702671333s]
Feb  7 12:35:39.988: INFO: 50 %ile: 2.697830436s
Feb  7 12:35:39.988: INFO: 90 %ile: 3.562917752s
Feb  7 12:35:39.988: INFO: 99 %ile: 4.607304364s
Feb  7 12:35:39.988: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:35:39.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-c8w4v" for this suite.
Feb  7 12:36:36.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:36:36.154: INFO: namespace: e2e-tests-svc-latency-c8w4v, resource: bindings, ignored listing per whitelist
Feb  7 12:36:36.184: INFO: namespace e2e-tests-svc-latency-c8w4v deletion completed in 56.155130094s

• [SLOW TEST:107.050 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:36:36.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-7b09b15a-49a6-11ea-abae-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-7b09b3f7-49a6-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7b09b15a-49a6-11ea-abae-0242ac110005
STEP: Updating configmap cm-test-opt-upd-7b09b3f7-49a6-11ea-abae-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-7b09b417-49a6-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:38:17.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5vk28" for this suite.
Feb  7 12:38:41.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:38:41.394: INFO: namespace: e2e-tests-projected-5vk28, resource: bindings, ignored listing per whitelist
Feb  7 12:38:41.460: INFO: namespace e2e-tests-projected-5vk28 deletion completed in 24.2786259s

• [SLOW TEST:125.276 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:38:41.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8wkj8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 12:38:41.595: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 12:39:19.849: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-8wkj8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 12:39:19.849: INFO: >>> kubeConfig: /root/.kube/config
I0207 12:39:19.934286       9 log.go:172] (0xc000b054a0) (0xc001abb400) Create stream
I0207 12:39:19.934356       9 log.go:172] (0xc000b054a0) (0xc001abb400) Stream added, broadcasting: 1
I0207 12:39:19.942260       9 log.go:172] (0xc000b054a0) Reply frame received for 1
I0207 12:39:19.942297       9 log.go:172] (0xc000b054a0) (0xc00228b540) Create stream
I0207 12:39:19.942307       9 log.go:172] (0xc000b054a0) (0xc00228b540) Stream added, broadcasting: 3
I0207 12:39:19.943624       9 log.go:172] (0xc000b054a0) Reply frame received for 3
I0207 12:39:19.943657       9 log.go:172] (0xc000b054a0) (0xc00188b360) Create stream
I0207 12:39:19.943668       9 log.go:172] (0xc000b054a0) (0xc00188b360) Stream added, broadcasting: 5
I0207 12:39:19.944930       9 log.go:172] (0xc000b054a0) Reply frame received for 5
I0207 12:39:20.127041       9 log.go:172] (0xc000b054a0) Data frame received for 3
I0207 12:39:20.127111       9 log.go:172] (0xc00228b540) (3) Data frame handling
I0207 12:39:20.127130       9 log.go:172] (0xc00228b540) (3) Data frame sent
I0207 12:39:20.286090       9 log.go:172] (0xc000b054a0) Data frame received for 1
I0207 12:39:20.286210       9 log.go:172] (0xc000b054a0) (0xc00228b540) Stream removed, broadcasting: 3
I0207 12:39:20.286249       9 log.go:172] (0xc001abb400) (1) Data frame handling
I0207 12:39:20.286265       9 log.go:172] (0xc001abb400) (1) Data frame sent
I0207 12:39:20.286304       9 log.go:172] (0xc000b054a0) (0xc00188b360) Stream removed, broadcasting: 5
I0207 12:39:20.286335       9 log.go:172] (0xc000b054a0) (0xc001abb400) Stream removed, broadcasting: 1
I0207 12:39:20.286378       9 log.go:172] (0xc000b054a0) Go away received
I0207 12:39:20.287060       9 log.go:172] (0xc000b054a0) (0xc001abb400) Stream removed, broadcasting: 1
I0207 12:39:20.287074       9 log.go:172] (0xc000b054a0) (0xc00228b540) Stream removed, broadcasting: 3
I0207 12:39:20.287079       9 log.go:172] (0xc000b054a0) (0xc00188b360) Stream removed, broadcasting: 5
Feb  7 12:39:20.287: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:39:20.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-8wkj8" for this suite.
Feb  7 12:39:46.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:39:46.419: INFO: namespace: e2e-tests-pod-network-test-8wkj8, resource: bindings, ignored listing per whitelist
Feb  7 12:39:46.570: INFO: namespace e2e-tests-pod-network-test-8wkj8 deletion completed in 26.268786393s

• [SLOW TEST:65.110 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:39:46.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ec83b32e-49a6-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:39:47.051: INFO: Waiting up to 5m0s for pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-27nnk" to be "success or failure"
Feb  7 12:39:47.063: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.484375ms
Feb  7 12:39:49.074: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023339277s
Feb  7 12:39:51.086: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035023314s
Feb  7 12:39:53.247: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196176539s
Feb  7 12:39:55.269: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218864869s
Feb  7 12:39:57.300: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.249766035s
STEP: Saw pod success
Feb  7 12:39:57.301: INFO: Pod "pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:39:57.362: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 12:39:57.559: INFO: Waiting for pod pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005 to disappear
Feb  7 12:39:57.571: INFO: Pod pod-secrets-ec84dc1d-49a6-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:39:57.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-27nnk" for this suite.
Feb  7 12:40:03.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:40:03.797: INFO: namespace: e2e-tests-secrets-27nnk, resource: bindings, ignored listing per whitelist
Feb  7 12:40:04.186: INFO: namespace e2e-tests-secrets-27nnk deletion completed in 6.60711132s

• [SLOW TEST:17.615 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:40:04.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  7 12:40:04.375: INFO: Waiting up to 5m0s for pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-wpzmn" to be "success or failure"
Feb  7 12:40:04.385: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.812642ms
Feb  7 12:40:06.402: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027026856s
Feb  7 12:40:08.423: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047856873s
Feb  7 12:40:10.434: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059208745s
Feb  7 12:40:12.450: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075471883s
Feb  7 12:40:14.605: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230087972s
STEP: Saw pod success
Feb  7 12:40:14.605: INFO: Pod "downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:40:14.627: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 12:40:15.110: INFO: Waiting for pod downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005 to disappear
Feb  7 12:40:15.128: INFO: Pod downward-api-f6dc8ff7-49a6-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:40:15.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wpzmn" for this suite.
Feb  7 12:40:21.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:40:21.306: INFO: namespace: e2e-tests-downward-api-wpzmn, resource: bindings, ignored listing per whitelist
Feb  7 12:40:21.408: INFO: namespace e2e-tests-downward-api-wpzmn deletion completed in 6.263495554s

• [SLOW TEST:17.222 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:40:21.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  7 12:40:21.601: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:40:42.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-zfwff" for this suite.
Feb  7 12:40:50.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:40:50.755: INFO: namespace: e2e-tests-init-container-zfwff, resource: bindings, ignored listing per whitelist
Feb  7 12:40:50.755: INFO: namespace e2e-tests-init-container-zfwff deletion completed in 8.444739556s

• [SLOW TEST:29.346 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:40:50.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:40:51.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-dxlnl" to be "success or failure"
Feb  7 12:40:51.201: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58921ms
Feb  7 12:40:53.251: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058117298s
Feb  7 12:40:55.261: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06840642s
Feb  7 12:40:57.715: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522266364s
Feb  7 12:40:59.725: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532741966s
Feb  7 12:41:01.739: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.546223218s
Feb  7 12:41:04.122: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.930098484s
STEP: Saw pod success
Feb  7 12:41:04.123: INFO: Pod "downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:41:04.139: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:41:04.249: INFO: Waiting for pod downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005 to disappear
Feb  7 12:41:04.265: INFO: Pod downwardapi-volume-12c9b158-49a7-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:41:04.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dxlnl" for this suite.
Feb  7 12:41:10.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:41:10.645: INFO: namespace: e2e-tests-projected-dxlnl, resource: bindings, ignored listing per whitelist
Feb  7 12:41:10.645: INFO: namespace e2e-tests-projected-dxlnl deletion completed in 6.359423759s

• [SLOW TEST:19.891 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:41:10.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6v6z6
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb  7 12:41:10.939: INFO: Found 0 stateful pods, waiting for 3
Feb  7 12:41:21.062: INFO: Found 2 stateful pods, waiting for 3
Feb  7 12:41:30.969: INFO: Found 2 stateful pods, waiting for 3
Feb  7 12:41:41.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:41:41.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:41:41.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 12:41:50.959: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:41:50.959: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:41:50.959: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  7 12:41:51.015: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  7 12:42:01.534: INFO: Updating stateful set ss2
Feb  7 12:42:01.556: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 12:42:11.582: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  7 12:42:22.171: INFO: Found 2 stateful pods, waiting for 3
Feb  7 12:42:32.313: INFO: Found 2 stateful pods, waiting for 3
Feb  7 12:42:42.207: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:42:42.207: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:42:42.207: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 12:42:52.191: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:42:52.191: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 12:42:52.191: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  7 12:42:52.223: INFO: Updating stateful set ss2
Feb  7 12:42:52.299: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 12:43:02.339: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 12:43:12.438: INFO: Updating stateful set ss2
Feb  7 12:43:12.667: INFO: Waiting for StatefulSet e2e-tests-statefulset-6v6z6/ss2 to complete update
Feb  7 12:43:12.667: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 12:43:22.732: INFO: Waiting for StatefulSet e2e-tests-statefulset-6v6z6/ss2 to complete update
Feb  7 12:43:22.732: INFO: Waiting for Pod e2e-tests-statefulset-6v6z6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  7 12:43:42.700: INFO: Waiting for StatefulSet e2e-tests-statefulset-6v6z6/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  7 12:43:52.685: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6v6z6
Feb  7 12:43:52.689: INFO: Scaling statefulset ss2 to 0
Feb  7 12:44:22.808: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 12:44:22.822: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:44:22.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6v6z6" for this suite.
Feb  7 12:44:30.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:44:31.070: INFO: namespace: e2e-tests-statefulset-6v6z6, resource: bindings, ignored listing per whitelist
Feb  7 12:44:31.131: INFO: namespace e2e-tests-statefulset-6v6z6 deletion completed in 8.238613077s

• [SLOW TEST:200.485 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:44:31.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:44:44.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-lg6mt" for this suite.
Feb  7 12:45:08.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:45:08.582: INFO: namespace: e2e-tests-replication-controller-lg6mt, resource: bindings, ignored listing per whitelist
Feb  7 12:45:08.625: INFO: namespace e2e-tests-replication-controller-lg6mt deletion completed in 24.246948363s

• [SLOW TEST:37.493 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:45:08.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  7 12:45:08.834: INFO: Waiting up to 5m0s for pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-2kf2f" to be "success or failure"
Feb  7 12:45:08.845: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.114383ms
Feb  7 12:45:10.975: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141372682s
Feb  7 12:45:12.988: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153857673s
Feb  7 12:45:15.161: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.326793419s
Feb  7 12:45:17.179: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345588396s
Feb  7 12:45:19.206: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.372494295s
STEP: Saw pod success
Feb  7 12:45:19.206: INFO: Pod "pod-ac5913fd-49a7-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:45:19.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac5913fd-49a7-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 12:45:19.394: INFO: Waiting for pod pod-ac5913fd-49a7-11ea-abae-0242ac110005 to disappear
Feb  7 12:45:19.402: INFO: Pod pod-ac5913fd-49a7-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:45:19.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2kf2f" for this suite.
Feb  7 12:45:25.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:45:25.505: INFO: namespace: e2e-tests-emptydir-2kf2f, resource: bindings, ignored listing per whitelist
Feb  7 12:45:25.578: INFO: namespace e2e-tests-emptydir-2kf2f deletion completed in 6.164977899s

• [SLOW TEST:16.953 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:45:25.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-b6742820-49a7-11ea-abae-0242ac110005
STEP: Creating secret with name s-test-opt-upd-b6742888-49a7-11ea-abae-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b6742820-49a7-11ea-abae-0242ac110005
STEP: Updating secret s-test-opt-upd-b6742888-49a7-11ea-abae-0242ac110005
STEP: Creating secret with name s-test-opt-create-b67428a8-49a7-11ea-abae-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:46:52.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kps5l" for this suite.
Feb  7 12:47:16.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:47:16.720: INFO: namespace: e2e-tests-projected-kps5l, resource: bindings, ignored listing per whitelist
Feb  7 12:47:16.862: INFO: namespace e2e-tests-projected-kps5l deletion completed in 24.402521003s

• [SLOW TEST:111.284 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:47:16.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:47:17.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-tw2mq" to be "success or failure"
Feb  7 12:47:17.193: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.442ms
Feb  7 12:47:19.209: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041239458s
Feb  7 12:47:21.245: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077334656s
Feb  7 12:47:23.903: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734837503s
Feb  7 12:47:25.932: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764307971s
Feb  7 12:47:27.949: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.78103298s
Feb  7 12:47:29.963: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.794691309s
STEP: Saw pod success
Feb  7 12:47:29.963: INFO: Pod "downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:47:29.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:47:30.067: INFO: Waiting for pod downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005 to disappear
Feb  7 12:47:30.190: INFO: Pod downwardapi-volume-f8ca1eb7-49a7-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:47:30.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tw2mq" for this suite.
Feb  7 12:47:36.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:47:36.382: INFO: namespace: e2e-tests-downward-api-tw2mq, resource: bindings, ignored listing per whitelist
Feb  7 12:47:36.447: INFO: namespace e2e-tests-downward-api-tw2mq deletion completed in 6.244290533s

• [SLOW TEST:19.585 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:47:36.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0480c663-49a8-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:47:36.732: INFO: Waiting up to 5m0s for pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-22dnc" to be "success or failure"
Feb  7 12:47:36.748: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.572829ms
Feb  7 12:47:38.954: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222250371s
Feb  7 12:47:40.990: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257608151s
Feb  7 12:47:43.405: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673212174s
Feb  7 12:47:45.447: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715187261s
Feb  7 12:47:48.562: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.830108131s
STEP: Saw pod success
Feb  7 12:47:48.563: INFO: Pod "pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:47:48.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 12:47:49.799: INFO: Waiting for pod pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005 to disappear
Feb  7 12:47:49.820: INFO: Pod pod-secrets-04820bdd-49a8-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:47:49.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-22dnc" for this suite.
Feb  7 12:47:56.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:47:56.452: INFO: namespace: e2e-tests-secrets-22dnc, resource: bindings, ignored listing per whitelist
Feb  7 12:47:56.523: INFO: namespace e2e-tests-secrets-22dnc deletion completed in 6.689091066s

• [SLOW TEST:20.075 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:47:56.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:47:56.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-xnk7v" to be "success or failure"
Feb  7 12:47:56.815: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.654153ms
Feb  7 12:47:58.846: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042567181s
Feb  7 12:48:00.859: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055094699s
Feb  7 12:48:02.885: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081105661s
Feb  7 12:48:04.907: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102868616s
Feb  7 12:48:06.922: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118587488s
STEP: Saw pod success
Feb  7 12:48:06.922: INFO: Pod "downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:48:06.932: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:48:07.263: INFO: Waiting for pod downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005 to disappear
Feb  7 12:48:07.272: INFO: Pod downwardapi-volume-1077b4e5-49a8-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:48:07.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xnk7v" for this suite.
Feb  7 12:48:13.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:48:13.537: INFO: namespace: e2e-tests-downward-api-xnk7v, resource: bindings, ignored listing per whitelist
Feb  7 12:48:13.631: INFO: namespace e2e-tests-downward-api-xnk7v deletion completed in 6.352822058s

• [SLOW TEST:17.108 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:48:13.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb  7 12:48:13.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:15.843: INFO: stderr: ""
Feb  7 12:48:15.843: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  7 12:48:15.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:16.234: INFO: stderr: ""
Feb  7 12:48:16.234: INFO: stdout: "update-demo-nautilus-f8k54 update-demo-nautilus-fx7q2 "
Feb  7 12:48:16.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8k54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:16.666: INFO: stderr: ""
Feb  7 12:48:16.666: INFO: stdout: ""
Feb  7 12:48:16.666: INFO: update-demo-nautilus-f8k54 is created but not running
Feb  7 12:48:21.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:21.870: INFO: stderr: ""
Feb  7 12:48:21.870: INFO: stdout: "update-demo-nautilus-f8k54 update-demo-nautilus-fx7q2 "
Feb  7 12:48:21.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8k54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:21.975: INFO: stderr: ""
Feb  7 12:48:21.975: INFO: stdout: ""
Feb  7 12:48:21.975: INFO: update-demo-nautilus-f8k54 is created but not running
Feb  7 12:48:26.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:27.168: INFO: stderr: ""
Feb  7 12:48:27.168: INFO: stdout: "update-demo-nautilus-f8k54 update-demo-nautilus-fx7q2 "
Feb  7 12:48:27.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8k54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:27.286: INFO: stderr: ""
Feb  7 12:48:27.286: INFO: stdout: ""
Feb  7 12:48:27.286: INFO: update-demo-nautilus-f8k54 is created but not running
Feb  7 12:48:32.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:32.511: INFO: stderr: ""
Feb  7 12:48:32.512: INFO: stdout: "update-demo-nautilus-f8k54 update-demo-nautilus-fx7q2 "
Feb  7 12:48:32.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8k54 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:32.621: INFO: stderr: ""
Feb  7 12:48:32.622: INFO: stdout: "true"
Feb  7 12:48:32.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8k54 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:32.768: INFO: stderr: ""
Feb  7 12:48:32.768: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 12:48:32.768: INFO: validating pod update-demo-nautilus-f8k54
Feb  7 12:48:32.858: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 12:48:32.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 12:48:32.858: INFO: update-demo-nautilus-f8k54 is verified up and running
Feb  7 12:48:32.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx7q2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:32.993: INFO: stderr: ""
Feb  7 12:48:32.993: INFO: stdout: "true"
Feb  7 12:48:32.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx7q2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:33.117: INFO: stderr: ""
Feb  7 12:48:33.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  7 12:48:33.118: INFO: validating pod update-demo-nautilus-fx7q2
Feb  7 12:48:33.135: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  7 12:48:33.135: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  7 12:48:33.135: INFO: update-demo-nautilus-fx7q2 is verified up and running
STEP: using delete to clean up resources
Feb  7 12:48:33.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:33.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 12:48:33.232: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  7 12:48:33.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-lpl8r'
Feb  7 12:48:33.341: INFO: stderr: "No resources found.\n"
Feb  7 12:48:33.341: INFO: stdout: ""
Feb  7 12:48:33.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-lpl8r -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 12:48:33.477: INFO: stderr: ""
Feb  7 12:48:33.477: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:48:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lpl8r" for this suite.
Feb  7 12:48:59.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:48:59.784: INFO: namespace: e2e-tests-kubectl-lpl8r, resource: bindings, ignored listing per whitelist
Feb  7 12:48:59.839: INFO: namespace e2e-tests-kubectl-lpl8r deletion completed in 26.342555439s

• [SLOW TEST:46.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
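The Update Demo verification above retries on a fixed 5-second cadence: list the pods matching the label selector, render a template that prints "true" only when the named container is running, and repeat until every pod passes. A rough sketch of that retry loop, with hypothetical `list_pods` and `is_running` callables standing in for the two `kubectl get pods` invocations:

```python
import time


def wait_all_running(list_pods, is_running, interval=5.0,
                     max_attempts=60, sleep=time.sleep):
    """Re-check each labelled pod until is_running(pod) is truthy for
    all of them, mirroring the 'is created but not running' retry loop
    in the log. Both callables are illustrative stand-ins for kubectl.
    """
    pending = []
    for _ in range(max_attempts):
        pods = list_pods()  # e.g. kubectl get pods -l name=update-demo
        pending = [p for p in pods if not is_running(p)]
        if not pending:
            return pods
        for p in pending:
            print(f"{p} is created but not running")
        sleep(interval)
    raise TimeoutError(f"pods never became ready: {pending}")
```

Note the trade-off visible in the log: a 5s interval means a pod that becomes ready just after a check still costs a full extra cycle, which is why the nautilus pods took three retries (~15s) to be reported running.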
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:48:59.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  7 12:52:03.311: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 12:52:03.359: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  7 12:52:05.359: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 12:52:05.376: INFO: Pod pod-with-poststart-exec-hook still exists
[... 48 further identical 2s polls elided: "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists", Feb  7 12:52:07 through Feb  7 12:53:41 ...]
Feb  7 12:53:43.359: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  7 12:53:43.379: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:53:43.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pbl6q" for this suite.
Feb  7 12:54:09.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:54:09.549: INFO: namespace: e2e-tests-container-lifecycle-hook-pbl6q, resource: bindings, ignored listing per whitelist
Feb  7 12:54:09.576: INFO: namespace e2e-tests-container-lifecycle-hook-pbl6q deletion completed in 26.187986442s

• [SLOW TEST:309.736 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
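Deletion in the lifecycle-hook test is confirmed the same way creation is: poll every 2 seconds until a lookup for the pod reports not-found, which is what produces the long run of "still exists" lines above (the pod's graceful termination plus its preStop handling kept it around for ~100 seconds). A sketch of that wait, assuming a hypothetical `pod_exists` predicate in place of the real API get:

```python
import time


def wait_for_pod_gone(pod_exists, name, timeout=300.0, interval=2.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll until pod_exists(name) is False, emitting the same
    'Waiting ... to disappear' / 'still exists' message pair the
    framework logs. `pod_exists` is an illustrative stand-in.
    """
    deadline = clock() + timeout
    while True:
        print(f"Waiting for pod {name} to disappear")
        if not pod_exists(name):
            print(f"Pod {name} no longer exists")
            return
        print(f"Pod {name} still exists")
        if clock() >= deadline:
            raise TimeoutError(f"pod {name} never disappeared")
        sleep(interval)
```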
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:54:09.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:54:09.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-wnwn4" to be "success or failure"
Feb  7 12:54:09.925: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 164.112123ms
Feb  7 12:54:11.994: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233973574s
Feb  7 12:54:14.027: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267044323s
Feb  7 12:54:16.045: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284907714s
Feb  7 12:54:18.094: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334037784s
Feb  7 12:54:20.112: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352040826s
Feb  7 12:54:22.150: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.389947927s
STEP: Saw pod success
Feb  7 12:54:22.150: INFO: Pod "downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:54:22.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:54:22.615: INFO: Waiting for pod downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005 to disappear
Feb  7 12:54:22.624: INFO: Pod downwardapi-volume-eec363cd-49a8-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:54:22.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wnwn4" for this suite.
Feb  7 12:54:28.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:54:28.756: INFO: namespace: e2e-tests-downward-api-wnwn4, resource: bindings, ignored listing per whitelist
Feb  7 12:54:28.849: INFO: namespace e2e-tests-downward-api-wnwn4 deletion completed in 6.220514696s

• [SLOW TEST:19.273 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
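The test above polls the pod's phase every ~2s (Pending → Succeeded) against a 5m budget. The pattern can be sketched as a generic wait helper; the function name and parameters here are illustrative, not the e2e framework's actual API:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=300.0,
                   interval=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach %s within %.0fs" % (want, timeout))

# Simulated pod that reports Pending for a few polls, then Succeeded --
# mirroring the log lines above without needing a cluster.
phases = iter(["Pending"] * 5 + ["Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0, sleep=lambda s: None))  # Succeeded
```

Injecting `clock` and `sleep` keeps the helper testable without real waiting; the framework's own `WaitForPodSuccessInNamespace` plays this role in the log.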
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:54:28.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  7 12:54:29.321: INFO: Number of nodes with available pods: 0
Feb  7 12:54:29.321: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:30.360: INFO: Number of nodes with available pods: 0
Feb  7 12:54:30.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:31.361: INFO: Number of nodes with available pods: 0
Feb  7 12:54:31.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:32.399: INFO: Number of nodes with available pods: 0
Feb  7 12:54:32.399: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:33.345: INFO: Number of nodes with available pods: 0
Feb  7 12:54:33.345: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:34.337: INFO: Number of nodes with available pods: 0
Feb  7 12:54:34.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:35.350: INFO: Number of nodes with available pods: 0
Feb  7 12:54:35.350: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:36.404: INFO: Number of nodes with available pods: 0
Feb  7 12:54:36.404: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:41.371: INFO: Number of nodes with available pods: 0
Feb  7 12:54:41.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:42.339: INFO: Number of nodes with available pods: 0
Feb  7 12:54:42.339: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:44.147: INFO: Number of nodes with available pods: 0
Feb  7 12:54:44.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:45.253: INFO: Number of nodes with available pods: 0
Feb  7 12:54:45.254: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:45.349: INFO: Number of nodes with available pods: 0
Feb  7 12:54:45.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:49.176: INFO: Number of nodes with available pods: 0
Feb  7 12:54:49.176: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:49.448: INFO: Number of nodes with available pods: 1
Feb  7 12:54:49.448: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  7 12:54:49.975: INFO: Number of nodes with available pods: 0
Feb  7 12:54:49.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:58.479: INFO: Number of nodes with available pods: 0
Feb  7 12:54:58.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:54:59.397: INFO: Number of nodes with available pods: 0
Feb  7 12:54:59.397: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:00.294: INFO: Number of nodes with available pods: 0
Feb  7 12:55:00.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:01.554: INFO: Number of nodes with available pods: 0
Feb  7 12:55:01.554: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:02.014: INFO: Number of nodes with available pods: 0
Feb  7 12:55:02.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:03.831: INFO: Number of nodes with available pods: 0
Feb  7 12:55:03.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:05.306: INFO: Number of nodes with available pods: 0
Feb  7 12:55:05.306: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:06.026: INFO: Number of nodes with available pods: 0
Feb  7 12:55:06.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:07.034: INFO: Number of nodes with available pods: 0
Feb  7 12:55:07.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:09.940: INFO: Number of nodes with available pods: 0
Feb  7 12:55:09.940: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:10.075: INFO: Number of nodes with available pods: 0
Feb  7 12:55:10.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:11.001: INFO: Number of nodes with available pods: 0
Feb  7 12:55:11.001: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:12.008: INFO: Number of nodes with available pods: 0
Feb  7 12:55:12.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:13.015: INFO: Number of nodes with available pods: 0
Feb  7 12:55:13.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  7 12:55:14.048: INFO: Number of nodes with available pods: 1
Feb  7 12:55:14.048: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6db45, will wait for the garbage collector to delete the pods
Feb  7 12:55:14.149: INFO: Deleting DaemonSet.extensions daemon-set took: 38.635926ms
Feb  7 12:55:14.349: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.469952ms
Feb  7 12:55:32.774: INFO: Number of nodes with available pods: 0
Feb  7 12:55:32.774: INFO: Number of running nodes: 0, number of available pods: 0
Feb  7 12:55:32.779: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6db45/daemonsets","resourceVersion":"20868278"},"items":null}

Feb  7 12:55:32.783: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6db45/pods","resourceVersion":"20868278"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:55:32.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6db45" for this suite.
Feb  7 12:55:40.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:55:40.984: INFO: namespace: e2e-tests-daemonsets-6db45, resource: bindings, ignored listing per whitelist
Feb  7 12:55:41.044: INFO: namespace e2e-tests-daemonsets-6db45 deletion completed in 8.236251644s

• [SLOW TEST:72.194 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
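The DaemonSet test above forces a daemon pod's phase to Failed and checks that the controller "revives" it. That reconcile behavior can be illustrated with a toy loop (a sketch of the idea only, not the actual controller code):

```python
def reconcile(nodes, pods):
    """One pass of a toy DaemonSet reconcile: drop failed pods, then ensure
    every eligible node has exactly one daemon pod. Returns the updated map."""
    pods = {n: p for n, p in pods.items() if p != "Failed"}
    for n in nodes:
        pods.setdefault(n, "Running")
    return pods

pods = reconcile(["hunter-server"], {})    # initial rollout: one pod per node
pods["hunter-server"] = "Failed"           # the e2e test sets the phase to Failed
pods = reconcile(["hunter-server"], pods)  # next pass recreates the pod
print(pods)  # {'hunter-server': 'Running'}
```

The repeated "Number of nodes with available pods: 0" lines in the log are the test waiting for this second reconcile pass to converge.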
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:55:41.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-255ddb3e-49a9-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:55:41.423: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-j54n2" to be "success or failure"
Feb  7 12:55:41.434: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.157076ms
Feb  7 12:55:43.504: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081252974s
Feb  7 12:55:45.534: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110429416s
Feb  7 12:55:49.544: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120365727s
Feb  7 12:55:51.559: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.136152744s
Feb  7 12:55:53.667: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243912237s
Feb  7 12:55:55.681: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.257645828s
STEP: Saw pod success
Feb  7 12:55:55.681: INFO: Pod "pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:55:55.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  7 12:55:56.722: INFO: Waiting for pod pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005 to disappear
Feb  7 12:55:56.744: INFO: Pod pod-projected-secrets-2560ab4c-49a9-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:55:56.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j54n2" for this suite.
Feb  7 12:56:02.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:56:02.929: INFO: namespace: e2e-tests-projected-j54n2, resource: bindings, ignored listing per whitelist
Feb  7 12:56:02.933: INFO: namespace e2e-tests-projected-j54n2 deletion completed in 6.18350309s

• [SLOW TEST:21.889 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:56:02.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-327623da-49a9-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 12:56:03.347: INFO: Waiting up to 5m0s for pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-kbtvg" to be "success or failure"
Feb  7 12:56:03.356: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409185ms
Feb  7 12:56:05.380: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0323957s
Feb  7 12:56:07.393: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045506174s
Feb  7 12:56:09.550: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202659094s
Feb  7 12:56:11.567: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219512181s
Feb  7 12:56:13.618: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270383327s
STEP: Saw pod success
Feb  7 12:56:13.618: INFO: Pod "pod-secrets-3277b738-49a9-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:56:13.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3277b738-49a9-11ea-abae-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb  7 12:56:13.891: INFO: Waiting for pod pod-secrets-3277b738-49a9-11ea-abae-0242ac110005 to disappear
Feb  7 12:56:13.911: INFO: Pod pod-secrets-3277b738-49a9-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:56:13.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kbtvg" for this suite.
Feb  7 12:56:20.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:56:20.166: INFO: namespace: e2e-tests-secrets-kbtvg, resource: bindings, ignored listing per whitelist
Feb  7 12:56:20.181: INFO: namespace e2e-tests-secrets-kbtvg deletion completed in 6.253695436s

• [SLOW TEST:17.248 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
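The Secrets test above checks that a Secret's data surfaces inside the container as environment variables. Secret `data` is stored base64-encoded and decoded before injection; a minimal sketch of that mapping (the key and value below are hypothetical, not the test's actual payload):

```python
import base64

def env_from_secret(secret_data):
    """Decode a Secret's base64 data map into plain env-var values,
    as happens before the values are injected into the container."""
    return {k: base64.b64decode(v).decode() for k, v in secret_data.items()}

# Hypothetical secret payload for illustration.
secret = {"SECRET_DATA": base64.b64encode(b"value-1").decode()}
print(env_from_secret(secret))  # {'SECRET_DATA': 'value-1'}
```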
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:56:20.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 12:56:20.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-qm74d" to be "success or failure"
Feb  7 12:56:20.543: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.417663ms
Feb  7 12:56:22.868: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39558392s
Feb  7 12:56:25.206: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.73414514s
Feb  7 12:56:27.217: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.74532683s
Feb  7 12:56:30.071: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.599124898s
Feb  7 12:56:32.128: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.655550839s
Feb  7 12:56:34.141: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.668889251s
Feb  7 12:56:36.155: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.682548171s
STEP: Saw pod success
Feb  7 12:56:36.155: INFO: Pod "downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:56:36.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 12:56:37.511: INFO: Waiting for pod downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005 to disappear
Feb  7 12:56:37.523: INFO: Pod downwardapi-volume-3caa85a6-49a9-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:56:37.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qm74d" for this suite.
Feb  7 12:56:43.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:56:43.810: INFO: namespace: e2e-tests-downward-api-qm74d, resource: bindings, ignored listing per whitelist
Feb  7 12:56:43.903: INFO: namespace e2e-tests-downward-api-qm74d deletion completed in 6.36465621s

• [SLOW TEST:23.722 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:56:43.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb  7 12:56:54.331: INFO: Pod pod-hostip-4ae0b170-49a9-11ea-abae-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:56:54.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c7wcd" for this suite.
Feb  7 12:57:18.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:57:18.464: INFO: namespace: e2e-tests-pods-c7wcd, resource: bindings, ignored listing per whitelist
Feb  7 12:57:18.655: INFO: namespace e2e-tests-pods-c7wcd deletion completed in 24.316356325s

• [SLOW TEST:34.752 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:57:18.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  7 12:57:19.035: INFO: Waiting up to 5m0s for pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005" in namespace "e2e-tests-downward-api-j5hq4" to be "success or failure"
Feb  7 12:57:19.146: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.318882ms
Feb  7 12:57:22.201: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.166007167s
Feb  7 12:57:24.224: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.188973959s
Feb  7 12:57:26.570: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.534868446s
Feb  7 12:57:28.616: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.581602351s
Feb  7 12:57:30.651: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.616372926s
Feb  7 12:57:32.664: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.629395074s
STEP: Saw pod success
Feb  7 12:57:32.664: INFO: Pod "downward-api-5f8e4789-49a9-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:57:32.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5f8e4789-49a9-11ea-abae-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  7 12:57:33.947: INFO: Waiting for pod downward-api-5f8e4789-49a9-11ea-abae-0242ac110005 to disappear
Feb  7 12:57:34.304: INFO: Pod downward-api-5f8e4789-49a9-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:57:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j5hq4" for this suite.
Feb  7 12:57:40.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:57:40.411: INFO: namespace: e2e-tests-downward-api-j5hq4, resource: bindings, ignored listing per whitelist
Feb  7 12:57:40.602: INFO: namespace e2e-tests-downward-api-j5hq4 deletion completed in 6.281216667s

• [SLOW TEST:21.947 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:57:40.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6cb91273-49a9-11ea-abae-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  7 12:57:41.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005" in namespace "e2e-tests-configmap-mpf25" to be "success or failure"
Feb  7 12:57:41.229: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 112.154736ms
Feb  7 12:57:43.251: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133783605s
Feb  7 12:57:45.436: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318764723s
Feb  7 12:57:47.460: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342639928s
Feb  7 12:57:49.482: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.364908017s
Feb  7 12:57:51.528: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.41048377s
Feb  7 12:57:53.544: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.42708705s
Feb  7 12:57:55.554: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.437341134s
STEP: Saw pod success
Feb  7 12:57:55.554: INFO: Pod "pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 12:57:55.563: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  7 12:57:55.760: INFO: Waiting for pod pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005 to disappear
Feb  7 12:57:55.775: INFO: Pod pod-configmaps-6cbbe49b-49a9-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:57:55.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mpf25" for this suite.
Feb  7 12:58:03.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:58:03.922: INFO: namespace: e2e-tests-configmap-mpf25, resource: bindings, ignored listing per whitelist
Feb  7 12:58:04.088: INFO: namespace e2e-tests-configmap-mpf25 deletion completed in 8.294948828s

• [SLOW TEST:23.486 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:58:04.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-2zg9
STEP: Creating a pod to test atomic-volume-subpath
Feb  7 12:58:04.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2zg9" in namespace "e2e-tests-subpath-4wcvv" to be "success or failure"
Feb  7 12:58:04.529: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.938404ms
Feb  7 12:58:06.815: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300422825s
Feb  7 12:58:08.830: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315733479s
Feb  7 12:58:11.517: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002696736s
Feb  7 12:58:13.527: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.012497905s
Feb  7 12:58:15.545: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.030454358s
Feb  7 12:58:17.560: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.046184277s
Feb  7 12:58:19.714: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.199968471s
Feb  7 12:58:22.467: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.952765619s
Feb  7 12:58:24.545: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 20.030466477s
Feb  7 12:58:26.575: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 22.060873382s
Feb  7 12:58:28.601: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 24.087227074s
Feb  7 12:58:30.632: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 26.117782453s
Feb  7 12:58:32.764: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 28.250275602s
Feb  7 12:58:34.786: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 30.271873281s
Feb  7 12:58:36.834: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 32.320164541s
Feb  7 12:58:38.851: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 34.336726186s
Feb  7 12:58:40.872: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Running", Reason="", readiness=false. Elapsed: 36.357417218s
Feb  7 12:58:43.649: INFO: Pod "pod-subpath-test-projected-2zg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.134961225s
STEP: Saw pod success
Feb  7 12:58:43.649: INFO: Pod "pod-subpath-test-projected-2zg9" satisfied condition "success or failure"
Feb  7 12:58:43.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-2zg9 container test-container-subpath-projected-2zg9: 
STEP: delete the pod
Feb  7 12:58:45.246: INFO: Waiting for pod pod-subpath-test-projected-2zg9 to disappear
Feb  7 12:58:45.394: INFO: Pod pod-subpath-test-projected-2zg9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-2zg9
Feb  7 12:58:45.394: INFO: Deleting pod "pod-subpath-test-projected-2zg9" in namespace "e2e-tests-subpath-4wcvv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:58:45.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4wcvv" for this suite.
Feb  7 12:58:51.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:58:51.497: INFO: namespace: e2e-tests-subpath-4wcvv, resource: bindings, ignored listing per whitelist
Feb  7 12:58:51.617: INFO: namespace e2e-tests-subpath-4wcvv deletion completed in 6.195966759s

• [SLOW TEST:47.528 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
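The subpath test above creates a pod whose container mounts a single file out of a projected volume via `subPath`. A roughly equivalent manifest, written by hand, looks like the following sketch — the ConfigMap name, image, key, and mount path are illustrative assumptions, not the test's actual fixture:

```shell
# Write a pod manifest that mounts one key of a projected volume via
# subPath, mirroring what pod-subpath-test-projected-2zg9 exercises.
# All names and values here are illustrative assumptions.
cat > /tmp/pod-subpath-projected.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-config        # assumed to exist in the namespace
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/mnt/data/key1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/data/key1
      subPath: key1                   # mount one file, not the whole volume
EOF
# Apply with: kubectl apply -f /tmp/pod-subpath-projected.yaml
grep -c 'subPath:' /tmp/pod-subpath-projected.yaml
```

The e2e test then waits for the pod to reach `Succeeded`, exactly as the Pending/Running/Succeeded phase polling above shows.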
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:58:51.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb  7 12:58:51.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrv25'
Feb  7 12:58:55.064: INFO: stderr: ""
Feb  7 12:58:55.065: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb  7 12:58:57.128: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:58:57.128: INFO: Found 0 / 1
Feb  7 12:58:58.567: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:58:58.567: INFO: Found 0 / 1
Feb  7 12:58:59.079: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:58:59.079: INFO: Found 0 / 1
Feb  7 12:59:00.078: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:00.078: INFO: Found 0 / 1
Feb  7 12:59:01.080: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:01.080: INFO: Found 0 / 1
Feb  7 12:59:02.093: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:02.093: INFO: Found 0 / 1
Feb  7 12:59:04.147: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:04.148: INFO: Found 0 / 1
Feb  7 12:59:05.092: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:05.092: INFO: Found 0 / 1
Feb  7 12:59:06.099: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:06.099: INFO: Found 0 / 1
Feb  7 12:59:07.078: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:07.078: INFO: Found 0 / 1
Feb  7 12:59:08.084: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:08.084: INFO: Found 1 / 1
Feb  7 12:59:08.084: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  7 12:59:08.094: INFO: Selector matched 1 pods for map[app:redis]
Feb  7 12:59:08.094: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  7 12:59:08.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25'
Feb  7 12:59:08.338: INFO: stderr: ""
Feb  7 12:59:08.338: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 12:59:06.673 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 12:59:06.673 # Server started, Redis version 3.2.12\n1:M 07 Feb 12:59:06.674 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 12:59:06.674 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  7 12:59:08.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25 --tail=1'
Feb  7 12:59:08.676: INFO: stderr: ""
Feb  7 12:59:08.676: INFO: stdout: "1:M 07 Feb 12:59:06.674 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  7 12:59:08.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25 --limit-bytes=1'
Feb  7 12:59:08.801: INFO: stderr: ""
Feb  7 12:59:08.801: INFO: stdout: " "
STEP: exposing timestamps
Feb  7 12:59:08.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25 --tail=1 --timestamps'
Feb  7 12:59:08.967: INFO: stderr: ""
Feb  7 12:59:08.967: INFO: stdout: "2020-02-07T12:59:06.674772934Z 1:M 07 Feb 12:59:06.674 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  7 12:59:11.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25 --since=1s'
Feb  7 12:59:11.664: INFO: stderr: ""
Feb  7 12:59:11.664: INFO: stdout: ""
Feb  7 12:59:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2m25l redis-master --namespace=e2e-tests-kubectl-wrv25 --since=24h'
Feb  7 12:59:11.807: INFO: stderr: ""
Feb  7 12:59:11.807: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Feb 12:59:06.673 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Feb 12:59:06.673 # Server started, Redis version 3.2.12\n1:M 07 Feb 12:59:06.674 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Feb 12:59:06.674 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb  7 12:59:11.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrv25'
Feb  7 12:59:11.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 12:59:11.966: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  7 12:59:11.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-wrv25'
Feb  7 12:59:12.181: INFO: stderr: "No resources found.\n"
Feb  7 12:59:12.181: INFO: stdout: ""
Feb  7 12:59:12.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-wrv25 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  7 12:59:12.333: INFO: stderr: ""
Feb  7 12:59:12.333: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 12:59:12.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wrv25" for this suite.
Feb  7 12:59:36.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 12:59:36.683: INFO: namespace: e2e-tests-kubectl-wrv25, resource: bindings, ignored listing per whitelist
Feb  7 12:59:36.727: INFO: namespace e2e-tests-kubectl-wrv25 deletion completed in 24.344887867s

• [SLOW TEST:45.110 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
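The log-filtering flags exercised above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) select line suffixes, byte prefixes, and time ranges of the container's log stream. Their line/byte behavior can be simulated locally with `tail` and `head` — the log file and its contents below are made-up stand-ins for the redis-master pod's output:

```shell
# Simulate the kubectl log-filtering flags on a local file.
# /tmp/redis.log and its lines are hypothetical stand-ins for the pod's logs.
printf '1:M 07 Feb 12:59:06.673 # Server started\n1:M 07 Feb 12:59:06.674 * Ready to accept connections\n' > /tmp/redis.log

tail -n 1 /tmp/redis.log   # analogous to: kubectl logs POD --tail=1
head -c 1 /tmp/redis.log   # analogous to: kubectl logs POD --limit-bytes=1
```

`--timestamps` additionally prefixes each line with its RFC3339 timestamp, and `--since=1s` / `--since=24h` filter by entry age, as the runs above show; those two have no exact local analogue.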
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 12:59:36.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s7lcr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.237.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.237.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.237.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.237.175_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s7lcr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s7lcr.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s7lcr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.237.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.237.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.237.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.237.175_tcp@PTR;sleep 1; done

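Each clause of the wheezy/jessie probe scripts above follows the same success-marker pattern: run a `dig` query, require non-empty output, then write `OK` to a per-record file under `/results`. The pattern itself can be demonstrated without a cluster or `dig` by substituting a plain command for the query (the directory and file name below are illustrative):

```shell
# The probe's success-marker pattern: capture a query's output, require
# it to be non-empty, and only then drop an OK marker file. Here a
# plain echo stands in for the dig query used in the real probe.
mkdir -p /tmp/results
check="$(echo 10.103.237.175)" && test -n "$check" && echo OK > /tmp/results/demo_udp@dns-test-service
cat /tmp/results/demo_udp@dns-test-service
```

The test's assertion phase then reads these marker files back out of the probe pod, which is why each missing file is reported as "Unable to read ..." until the lookups succeed.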
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  7 13:00:01.402: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.482: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.490: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.500: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.508: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.513: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.517: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.523: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.528: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.533: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.540: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.546: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.550: INFO: Unable to read 10.103.237.175_udp@PTR from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.554: INFO: Unable to read 10.103.237.175_tcp@PTR from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.560: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.564: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.618: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s7lcr from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.633: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.638: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.644: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.657: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.663: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.668: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.673: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.678: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.682: INFO: Unable to read 10.103.237.175_udp@PTR from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.687: INFO: Unable to read 10.103.237.175_tcp@PTR from pod e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005: the server could not find the requested resource (get pods dns-test-b1db3194-49a9-11ea-abae-0242ac110005)
Feb  7 13:00:01.687: INFO: Lookups using e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr wheezy_udp@dns-test-service.e2e-tests-dns-s7lcr.svc wheezy_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.103.237.175_udp@PTR 10.103.237.175_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s7lcr jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr jessie_udp@dns-test-service.e2e-tests-dns-s7lcr.svc jessie_tcp@dns-test-service.e2e-tests-dns-s7lcr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s7lcr.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s7lcr.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.103.237.175_udp@PTR 10.103.237.175_tcp@PTR]

Feb  7 13:00:07.184: INFO: DNS probes using e2e-tests-dns-s7lcr/dns-test-b1db3194-49a9-11ea-abae-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:00:07.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-s7lcr" for this suite.
Feb  7 13:00:15.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:00:15.926: INFO: namespace: e2e-tests-dns-s7lcr, resource: bindings, ignored listing per whitelist
Feb  7 13:00:16.150: INFO: namespace e2e-tests-dns-s7lcr deletion completed in 8.603933026s

• [SLOW TEST:39.422 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
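Among the records the probe queries is a PTR record for the service ClusterIP `10.103.237.175`; its lookup name is formed by reversing the octets and appending `in-addr.arpa.`, which is how `175.237.103.10.in-addr.arpa.` in the probe script is derived. That construction can be sketched as:

```shell
# Build the reverse-lookup (PTR) name for a service ClusterIP:
# the four octets are reversed and suffixed with in-addr.arpa.
ip=10.103.237.175
echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
```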
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:00:16.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 13:00:16.390: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  7 13:00:21.409: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 13:00:27.452: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  7 13:00:27.680: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-j7gjf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-j7gjf/deployments/test-cleanup-deployment,UID:cff13dd4-49a9-11ea-a994-fa163e34d433,ResourceVersion:20868917,Generation:1,CreationTimestamp:2020-02-07 13:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  7 13:00:27.687: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:00:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-j7gjf" for this suite.
Feb  7 13:00:35.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:00:35.901: INFO: namespace: e2e-tests-deployment-j7gjf, resource: bindings, ignored listing per whitelist
Feb  7 13:00:36.040: INFO: namespace e2e-tests-deployment-j7gjf deletion completed in 8.333866465s

• [SLOW TEST:19.890 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:00:36.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-99cgv
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-99cgv
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-99cgv
Feb  7 13:00:36.776: INFO: Found 0 stateful pods, waiting for 1
Feb  7 13:00:46.793: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  7 13:00:56.784: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  7 13:00:56.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:00:57.571: INFO: stderr: "I0207 13:00:56.981299    2748 log.go:172] (0xc0006fa370) (0xc00073a640) Create stream\nI0207 13:00:56.981688    2748 log.go:172] (0xc0006fa370) (0xc00073a640) Stream added, broadcasting: 1\nI0207 13:00:56.988023    2748 log.go:172] (0xc0006fa370) Reply frame received for 1\nI0207 13:00:56.988072    2748 log.go:172] (0xc0006fa370) (0xc000604dc0) Create stream\nI0207 13:00:56.988096    2748 log.go:172] (0xc0006fa370) (0xc000604dc0) Stream added, broadcasting: 3\nI0207 13:00:56.988998    2748 log.go:172] (0xc0006fa370) Reply frame received for 3\nI0207 13:00:56.989026    2748 log.go:172] (0xc0006fa370) (0xc000602000) Create stream\nI0207 13:00:56.989035    2748 log.go:172] (0xc0006fa370) (0xc000602000) Stream added, broadcasting: 5\nI0207 13:00:56.989888    2748 log.go:172] (0xc0006fa370) Reply frame received for 5\nI0207 13:00:57.385579    2748 log.go:172] (0xc0006fa370) Data frame received for 3\nI0207 13:00:57.385645    2748 log.go:172] (0xc000604dc0) (3) Data frame handling\nI0207 13:00:57.385668    2748 log.go:172] (0xc000604dc0) (3) Data frame sent\nI0207 13:00:57.559689    2748 log.go:172] (0xc0006fa370) Data frame received for 1\nI0207 13:00:57.560238    2748 log.go:172] (0xc0006fa370) (0xc000604dc0) Stream removed, broadcasting: 3\nI0207 13:00:57.560536    2748 log.go:172] (0xc00073a640) (1) Data frame handling\nI0207 13:00:57.560676    2748 log.go:172] (0xc00073a640) (1) Data frame sent\nI0207 13:00:57.560730    2748 log.go:172] (0xc0006fa370) (0xc00073a640) Stream removed, broadcasting: 1\nI0207 13:00:57.561488    2748 log.go:172] (0xc0006fa370) (0xc000602000) Stream removed, broadcasting: 5\nI0207 13:00:57.561574    2748 log.go:172] (0xc0006fa370) Go away received\nI0207 13:00:57.561846    2748 log.go:172] (0xc0006fa370) (0xc00073a640) Stream removed, broadcasting: 1\nI0207 13:00:57.561880    2748 log.go:172] (0xc0006fa370) (0xc000604dc0) Stream removed, broadcasting: 3\nI0207 13:00:57.561900    2748 log.go:172] 
(0xc0006fa370) (0xc000602000) Stream removed, broadcasting: 5\n"
Feb  7 13:00:57.571: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:00:57.572: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:00:57.651: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:00:57.651: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:00:57.711: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:00:57.711: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:00:57.711: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:00:57.711: INFO: 
Feb  7 13:00:57.711: INFO: StatefulSet ss has not reached scale 3, at 2
Feb  7 13:00:59.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962031792s
Feb  7 13:01:00.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.08595987s
Feb  7 13:01:01.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.043535188s
Feb  7 13:01:02.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.89819396s
Feb  7 13:01:03.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.87089017s
Feb  7 13:01:04.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.84451373s
Feb  7 13:01:07.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.838546595s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-99cgv
Feb  7 13:01:08.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:09.231: INFO: stderr: "I0207 13:01:08.540481    2770 log.go:172] (0xc0006cc0b0) (0xc0007005a0) Create stream\nI0207 13:01:08.540677    2770 log.go:172] (0xc0006cc0b0) (0xc0007005a0) Stream added, broadcasting: 1\nI0207 13:01:08.550900    2770 log.go:172] (0xc0006cc0b0) Reply frame received for 1\nI0207 13:01:08.551211    2770 log.go:172] (0xc0006cc0b0) (0xc000700640) Create stream\nI0207 13:01:08.551228    2770 log.go:172] (0xc0006cc0b0) (0xc000700640) Stream added, broadcasting: 3\nI0207 13:01:08.553746    2770 log.go:172] (0xc0006cc0b0) Reply frame received for 3\nI0207 13:01:08.553798    2770 log.go:172] (0xc0006cc0b0) (0xc000770dc0) Create stream\nI0207 13:01:08.553835    2770 log.go:172] (0xc0006cc0b0) (0xc000770dc0) Stream added, broadcasting: 5\nI0207 13:01:08.554734    2770 log.go:172] (0xc0006cc0b0) Reply frame received for 5\nI0207 13:01:09.054530    2770 log.go:172] (0xc0006cc0b0) Data frame received for 3\nI0207 13:01:09.054670    2770 log.go:172] (0xc000700640) (3) Data frame handling\nI0207 13:01:09.054697    2770 log.go:172] (0xc000700640) (3) Data frame sent\nI0207 13:01:09.218841    2770 log.go:172] (0xc0006cc0b0) (0xc000700640) Stream removed, broadcasting: 3\nI0207 13:01:09.219234    2770 log.go:172] (0xc0006cc0b0) Data frame received for 1\nI0207 13:01:09.219293    2770 log.go:172] (0xc0007005a0) (1) Data frame handling\nI0207 13:01:09.219354    2770 log.go:172] (0xc0007005a0) (1) Data frame sent\nI0207 13:01:09.219401    2770 log.go:172] (0xc0006cc0b0) (0xc0007005a0) Stream removed, broadcasting: 1\nI0207 13:01:09.219872    2770 log.go:172] (0xc0006cc0b0) (0xc000770dc0) Stream removed, broadcasting: 5\nI0207 13:01:09.219960    2770 log.go:172] (0xc0006cc0b0) Go away received\nI0207 13:01:09.220323    2770 log.go:172] (0xc0006cc0b0) (0xc0007005a0) Stream removed, broadcasting: 1\nI0207 13:01:09.220351    2770 log.go:172] (0xc0006cc0b0) (0xc000700640) Stream removed, broadcasting: 3\nI0207 13:01:09.220367    2770 log.go:172] 
(0xc0006cc0b0) (0xc000770dc0) Stream removed, broadcasting: 5\n"
Feb  7 13:01:09.231: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:01:09.231: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:01:09.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:09.852: INFO: rc: 1
Feb  7 13:01:09.852: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000c528a0 exit status 1   true [0xc001176700 0xc001176718 0xc001176730] [0xc001176700 0xc001176718 0xc001176730] [0xc001176710 0xc001176728] [0x935700 0x935700] 0xc0018a7680 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  7 13:01:19.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:20.406: INFO: stderr: "I0207 13:01:20.137182    2813 log.go:172] (0xc000712370) (0xc00073c640) Create stream\nI0207 13:01:20.137390    2813 log.go:172] (0xc000712370) (0xc00073c640) Stream added, broadcasting: 1\nI0207 13:01:20.142842    2813 log.go:172] (0xc000712370) Reply frame received for 1\nI0207 13:01:20.142900    2813 log.go:172] (0xc000712370) (0xc0000ecbe0) Create stream\nI0207 13:01:20.142937    2813 log.go:172] (0xc000712370) (0xc0000ecbe0) Stream added, broadcasting: 3\nI0207 13:01:20.144263    2813 log.go:172] (0xc000712370) Reply frame received for 3\nI0207 13:01:20.144327    2813 log.go:172] (0xc000712370) (0xc0000ecd20) Create stream\nI0207 13:01:20.144339    2813 log.go:172] (0xc000712370) (0xc0000ecd20) Stream added, broadcasting: 5\nI0207 13:01:20.145348    2813 log.go:172] (0xc000712370) Reply frame received for 5\nI0207 13:01:20.270218    2813 log.go:172] (0xc000712370) Data frame received for 5\nI0207 13:01:20.270348    2813 log.go:172] (0xc0000ecd20) (5) Data frame handling\nI0207 13:01:20.270395    2813 log.go:172] (0xc0000ecd20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0207 13:01:20.270471    2813 log.go:172] (0xc000712370) Data frame received for 3\nI0207 13:01:20.270493    2813 log.go:172] (0xc0000ecbe0) (3) Data frame handling\nI0207 13:01:20.270522    2813 log.go:172] (0xc0000ecbe0) (3) Data frame sent\nI0207 13:01:20.394605    2813 log.go:172] (0xc000712370) Data frame received for 1\nI0207 13:01:20.394893    2813 log.go:172] (0xc000712370) (0xc0000ecbe0) Stream removed, broadcasting: 3\nI0207 13:01:20.395087    2813 log.go:172] (0xc00073c640) (1) Data frame handling\nI0207 13:01:20.395171    2813 log.go:172] (0xc00073c640) (1) Data frame sent\nI0207 13:01:20.395298    2813 log.go:172] (0xc000712370) (0xc0000ecd20) Stream removed, broadcasting: 5\nI0207 13:01:20.395394    2813 log.go:172] (0xc000712370) (0xc00073c640) Stream removed, broadcasting: 1\nI0207 13:01:20.395431    
2813 log.go:172] (0xc000712370) Go away received\nI0207 13:01:20.395972    2813 log.go:172] (0xc000712370) (0xc00073c640) Stream removed, broadcasting: 1\nI0207 13:01:20.395985    2813 log.go:172] (0xc000712370) (0xc0000ecbe0) Stream removed, broadcasting: 3\nI0207 13:01:20.395990    2813 log.go:172] (0xc000712370) (0xc0000ecd20) Stream removed, broadcasting: 5\n"
Feb  7 13:01:20.407: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:01:20.407: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:01:20.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:20.923: INFO: stderr: "I0207 13:01:20.656723    2836 log.go:172] (0xc000704370) (0xc000722640) Create stream\nI0207 13:01:20.656981    2836 log.go:172] (0xc000704370) (0xc000722640) Stream added, broadcasting: 1\nI0207 13:01:20.661153    2836 log.go:172] (0xc000704370) Reply frame received for 1\nI0207 13:01:20.661206    2836 log.go:172] (0xc000704370) (0xc0007aebe0) Create stream\nI0207 13:01:20.661224    2836 log.go:172] (0xc000704370) (0xc0007aebe0) Stream added, broadcasting: 3\nI0207 13:01:20.662399    2836 log.go:172] (0xc000704370) Reply frame received for 3\nI0207 13:01:20.662436    2836 log.go:172] (0xc000704370) (0xc0007a6000) Create stream\nI0207 13:01:20.662452    2836 log.go:172] (0xc000704370) (0xc0007a6000) Stream added, broadcasting: 5\nI0207 13:01:20.663341    2836 log.go:172] (0xc000704370) Reply frame received for 5\nI0207 13:01:20.772625    2836 log.go:172] (0xc000704370) Data frame received for 3\nI0207 13:01:20.772667    2836 log.go:172] (0xc0007aebe0) (3) Data frame handling\nI0207 13:01:20.772691    2836 log.go:172] (0xc0007aebe0) (3) Data frame sent\nI0207 13:01:20.772733    2836 log.go:172] (0xc000704370) Data frame received for 5\nI0207 13:01:20.772754    2836 log.go:172] (0xc0007a6000) (5) Data frame handling\nI0207 13:01:20.772771    2836 log.go:172] (0xc0007a6000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0207 13:01:20.913087    2836 log.go:172] (0xc000704370) (0xc0007aebe0) Stream removed, broadcasting: 3\nI0207 13:01:20.913191    2836 log.go:172] (0xc000704370) Data frame received for 1\nI0207 13:01:20.913214    2836 log.go:172] (0xc000722640) (1) Data frame handling\nI0207 13:01:20.913242    2836 log.go:172] (0xc000722640) (1) Data frame sent\nI0207 13:01:20.913262    2836 log.go:172] (0xc000704370) (0xc000722640) Stream removed, broadcasting: 1\nI0207 13:01:20.913286    2836 log.go:172] (0xc000704370) (0xc0007a6000) Stream removed, broadcasting: 5\nI0207 13:01:20.913476    
2836 log.go:172] (0xc000704370) (0xc000722640) Stream removed, broadcasting: 1\nI0207 13:01:20.913492    2836 log.go:172] (0xc000704370) (0xc0007aebe0) Stream removed, broadcasting: 3\nI0207 13:01:20.913498    2836 log.go:172] (0xc000704370) (0xc0007a6000) Stream removed, broadcasting: 5\n"
Feb  7 13:01:20.924: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  7 13:01:20.924: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  7 13:01:20.943: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:01:20.943: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  7 13:01:20.943: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  7 13:01:20.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:01:21.423: INFO: stderr: "I0207 13:01:21.150603    2858 log.go:172] (0xc0006ee2c0) (0xc000734780) Create stream\nI0207 13:01:21.150831    2858 log.go:172] (0xc0006ee2c0) (0xc000734780) Stream added, broadcasting: 1\nI0207 13:01:21.155524    2858 log.go:172] (0xc0006ee2c0) Reply frame received for 1\nI0207 13:01:21.155557    2858 log.go:172] (0xc0006ee2c0) (0xc0003a8500) Create stream\nI0207 13:01:21.155569    2858 log.go:172] (0xc0006ee2c0) (0xc0003a8500) Stream added, broadcasting: 3\nI0207 13:01:21.156734    2858 log.go:172] (0xc0006ee2c0) Reply frame received for 3\nI0207 13:01:21.156755    2858 log.go:172] (0xc0006ee2c0) (0xc0005bcc80) Create stream\nI0207 13:01:21.156765    2858 log.go:172] (0xc0006ee2c0) (0xc0005bcc80) Stream added, broadcasting: 5\nI0207 13:01:21.157658    2858 log.go:172] (0xc0006ee2c0) Reply frame received for 5\nI0207 13:01:21.255211    2858 log.go:172] (0xc0006ee2c0) Data frame received for 3\nI0207 13:01:21.255377    2858 log.go:172] (0xc0003a8500) (3) Data frame handling\nI0207 13:01:21.255401    2858 log.go:172] (0xc0003a8500) (3) Data frame sent\nI0207 13:01:21.411635    2858 log.go:172] (0xc0006ee2c0) (0xc0003a8500) Stream removed, broadcasting: 3\nI0207 13:01:21.412046    2858 log.go:172] (0xc0006ee2c0) Data frame received for 1\nI0207 13:01:21.412119    2858 log.go:172] (0xc000734780) (1) Data frame handling\nI0207 13:01:21.412192    2858 log.go:172] (0xc000734780) (1) Data frame sent\nI0207 13:01:21.412612    2858 log.go:172] (0xc0006ee2c0) (0xc000734780) Stream removed, broadcasting: 1\nI0207 13:01:21.413179    2858 log.go:172] (0xc0006ee2c0) (0xc0005bcc80) Stream removed, broadcasting: 5\nI0207 13:01:21.413275    2858 log.go:172] (0xc0006ee2c0) Go away received\nI0207 13:01:21.413338    2858 log.go:172] (0xc0006ee2c0) (0xc000734780) Stream removed, broadcasting: 1\nI0207 13:01:21.413433    2858 log.go:172] (0xc0006ee2c0) (0xc0003a8500) Stream removed, broadcasting: 3\nI0207 13:01:21.413454    2858 log.go:172] 
(0xc0006ee2c0) (0xc0005bcc80) Stream removed, broadcasting: 5\n"
Feb  7 13:01:21.423: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:01:21.423: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:01:21.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:01:22.029: INFO: stderr: "I0207 13:01:21.763675    2880 log.go:172] (0xc0008682c0) (0xc000687360) Create stream\nI0207 13:01:21.763832    2880 log.go:172] (0xc0008682c0) (0xc000687360) Stream added, broadcasting: 1\nI0207 13:01:21.767511    2880 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0207 13:01:21.767552    2880 log.go:172] (0xc0008682c0) (0xc000524000) Create stream\nI0207 13:01:21.767578    2880 log.go:172] (0xc0008682c0) (0xc000524000) Stream added, broadcasting: 3\nI0207 13:01:21.768439    2880 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0207 13:01:21.768460    2880 log.go:172] (0xc0008682c0) (0xc000496000) Create stream\nI0207 13:01:21.768486    2880 log.go:172] (0xc0008682c0) (0xc000496000) Stream added, broadcasting: 5\nI0207 13:01:21.769386    2880 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0207 13:01:21.911650    2880 log.go:172] (0xc0008682c0) Data frame received for 3\nI0207 13:01:21.911694    2880 log.go:172] (0xc000524000) (3) Data frame handling\nI0207 13:01:21.911714    2880 log.go:172] (0xc000524000) (3) Data frame sent\nI0207 13:01:22.015340    2880 log.go:172] (0xc0008682c0) (0xc000524000) Stream removed, broadcasting: 3\nI0207 13:01:22.015716    2880 log.go:172] (0xc0008682c0) Data frame received for 1\nI0207 13:01:22.015757    2880 log.go:172] (0xc000687360) (1) Data frame handling\nI0207 13:01:22.015780    2880 log.go:172] (0xc000687360) (1) Data frame sent\nI0207 13:01:22.015911    2880 log.go:172] (0xc0008682c0) (0xc000496000) Stream removed, broadcasting: 5\nI0207 13:01:22.015986    2880 log.go:172] (0xc0008682c0) (0xc000687360) Stream removed, broadcasting: 1\nI0207 13:01:22.016037    2880 log.go:172] (0xc0008682c0) Go away received\nI0207 13:01:22.016412    2880 log.go:172] (0xc0008682c0) (0xc000687360) Stream removed, broadcasting: 1\nI0207 13:01:22.016447    2880 log.go:172] (0xc0008682c0) (0xc000524000) Stream removed, broadcasting: 3\nI0207 13:01:22.016468    2880 log.go:172] 
(0xc0008682c0) (0xc000496000) Stream removed, broadcasting: 5\n"
Feb  7 13:01:22.029: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:01:22.029: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:01:22.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  7 13:01:22.618: INFO: stderr: "I0207 13:01:22.231323    2902 log.go:172] (0xc00015c6e0) (0xc0006712c0) Create stream\nI0207 13:01:22.231551    2902 log.go:172] (0xc00015c6e0) (0xc0006712c0) Stream added, broadcasting: 1\nI0207 13:01:22.238961    2902 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0207 13:01:22.239034    2902 log.go:172] (0xc00015c6e0) (0xc0007b0000) Create stream\nI0207 13:01:22.239053    2902 log.go:172] (0xc00015c6e0) (0xc0007b0000) Stream added, broadcasting: 3\nI0207 13:01:22.240428    2902 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0207 13:01:22.240487    2902 log.go:172] (0xc00015c6e0) (0xc000642000) Create stream\nI0207 13:01:22.240502    2902 log.go:172] (0xc00015c6e0) (0xc000642000) Stream added, broadcasting: 5\nI0207 13:01:22.241528    2902 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0207 13:01:22.394493    2902 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0207 13:01:22.394539    2902 log.go:172] (0xc0007b0000) (3) Data frame handling\nI0207 13:01:22.394577    2902 log.go:172] (0xc0007b0000) (3) Data frame sent\nI0207 13:01:22.608454    2902 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0207 13:01:22.608651    2902 log.go:172] (0xc00015c6e0) (0xc0007b0000) Stream removed, broadcasting: 3\nI0207 13:01:22.608743    2902 log.go:172] (0xc0006712c0) (1) Data frame handling\nI0207 13:01:22.608760    2902 log.go:172] (0xc0006712c0) (1) Data frame sent\nI0207 13:01:22.608778    2902 log.go:172] (0xc00015c6e0) (0xc0006712c0) Stream removed, broadcasting: 1\nI0207 13:01:22.608814    2902 log.go:172] (0xc00015c6e0) (0xc000642000) Stream removed, broadcasting: 5\nI0207 13:01:22.608928    2902 log.go:172] (0xc00015c6e0) Go away received\nI0207 13:01:22.609077    2902 log.go:172] (0xc00015c6e0) (0xc0006712c0) Stream removed, broadcasting: 1\nI0207 13:01:22.609115    2902 log.go:172] (0xc00015c6e0) (0xc0007b0000) Stream removed, broadcasting: 3\nI0207 13:01:22.609129    2902 log.go:172] 
(0xc00015c6e0) (0xc000642000) Stream removed, broadcasting: 5\n"
Feb  7 13:01:22.618: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  7 13:01:22.618: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  7 13:01:22.618: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:01:22.636: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  7 13:01:32.661: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:01:32.661: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:01:32.661: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  7 13:01:32.726: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:32.726: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:32.726: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:32.726: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:32.726: INFO: 
Feb  7 13:01:32.726: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:33.752: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:33.752: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:33.752: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:33.752: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:33.752: INFO: 
Feb  7 13:01:33.752: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:35.151: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:35.151: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:35.151: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:35.151: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:35.151: INFO: 
Feb  7 13:01:35.151: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:36.199: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:36.199: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:36.199: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:36.199: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:36.199: INFO: 
Feb  7 13:01:36.199: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:37.217: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:37.217: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:37.217: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:37.217: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:37.217: INFO: 
Feb  7 13:01:37.217: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:38.270: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:38.270: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:38.270: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:38.270: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:38.270: INFO: 
Feb  7 13:01:38.270: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:39.289: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:39.289: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:39.289: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:39.289: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:39.289: INFO: 
Feb  7 13:01:39.289: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:40.469: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:40.470: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:40.470: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:40.470: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:40.470: INFO: 
Feb  7 13:01:40.470: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  7 13:01:41.493: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:41.493: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:41.493: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:57 +0000 UTC  }]
Feb  7 13:01:41.493: INFO: 
Feb  7 13:01:41.493: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  7 13:01:42.524: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb  7 13:01:42.524: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:01:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:00:37 +0000 UTC  }]
Feb  7 13:01:42.524: INFO: 
Feb  7 13:01:42.524: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-99cgv
Feb  7 13:01:43.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:43.815: INFO: rc: 1
Feb  7 13:01:43.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000f9ebd0 exit status 1   true [0xc000197ff0 0xc001c5e010 0xc001c5e028] [0xc000197ff0 0xc001c5e010 0xc001c5e028] [0xc001c5e008 0xc001c5e020] [0x935700 0x935700] 0xc001082900 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb  7 13:01:53.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:01:54.042: INFO: rc: 1
Feb  7 13:01:54.042: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a8de0 exit status 1   true [0xc0003e0b68 0xc0003e0be0 0xc0003e0c18] [0xc0003e0b68 0xc0003e0be0 0xc0003e0c18] [0xc0003e0ba0 0xc0003e0c00] [0x935700 0x935700] 0xc001b9a1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:04.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:04.254: INFO: rc: 1
Feb  7 13:02:04.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a9080 exit status 1   true [0xc0003e0c20 0xc0003e0c70 0xc0003e0cd8] [0xc0003e0c20 0xc0003e0c70 0xc0003e0cd8] [0xc0003e0c50 0xc0003e0cd0] [0x935700 0x935700] 0xc001b9baa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:14.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:14.391: INFO: rc: 1
Feb  7 13:02:14.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f9ef30 exit status 1   true [0xc001c5e030 0xc001c5e048 0xc001c5e060] [0xc001c5e030 0xc001c5e048 0xc001c5e060] [0xc001c5e040 0xc001c5e058] [0x935700 0x935700] 0xc001082d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:24.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:24.533: INFO: rc: 1
Feb  7 13:02:24.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0012a2c30 exit status 1   true [0xc001176850 0xc001176868 0xc001176880] [0xc001176850 0xc001176868 0xc001176880] [0xc001176860 0xc001176878] [0x935700 0x935700] 0xc001776180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:34.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:34.674: INFO: rc: 1
Feb  7 13:02:34.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a8000 exit status 1   true [0xc001c5e068 0xc001c5e080 0xc001c5e098] [0xc001c5e068 0xc001c5e080 0xc001c5e098] [0xc001c5e078 0xc001c5e090] [0x935700 0x935700] 0xc001b08000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:44.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:44.831: INFO: rc: 1
Feb  7 13:02:44.831: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea6270 exit status 1   true [0xc0000e8150 0xc0003e0138 0xc0003e0188] [0xc0000e8150 0xc0003e0138 0xc0003e0188] [0xc0003e0108 0xc0003e0178] [0x935700 0x935700] 0xc001b9b860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:02:54.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:02:54.980: INFO: rc: 1
Feb  7 13:02:54.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b0e0 exit status 1   true [0xc000196cb8 0xc000196db8 0xc000196e60] [0xc000196cb8 0xc000196db8 0xc000196e60] [0xc000196d38 0xc000196e38] [0x935700 0x935700] 0xc0019fb8c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:04.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:05.116: INFO: rc: 1
Feb  7 13:03:05.116: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b200 exit status 1   true [0xc000196e88 0xc000196fb0 0xc000196ff8] [0xc000196e88 0xc000196fb0 0xc000196ff8] [0xc000196f18 0xc000196fe0] [0x935700 0x935700] 0xc00104eea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:15.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:15.258: INFO: rc: 1
Feb  7 13:03:15.258: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b380 exit status 1   true [0xc000197000 0xc000197070 0xc0001970c8] [0xc000197000 0xc000197070 0xc0001970c8] [0xc000197050 0xc0001970a8] [0x935700 0x935700] 0xc001350240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:25.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:25.389: INFO: rc: 1
Feb  7 13:03:25.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b4d0 exit status 1   true [0xc000197100 0xc000197170 0xc0001971b8] [0xc000197100 0xc000197170 0xc0001971b8] [0xc000197158 0xc0001971a0] [0x935700 0x935700] 0xc0013517a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:35.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:35.558: INFO: rc: 1
Feb  7 13:03:35.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a81e0 exit status 1   true [0xc001748030 0xc0017480a0 0xc001748100] [0xc001748030 0xc0017480a0 0xc001748100] [0xc001748088 0xc0017480e0] [0x935700 0x935700] 0xc001b084e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:45.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:45.727: INFO: rc: 1
Feb  7 13:03:45.727: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a83c0 exit status 1   true [0xc001748120 0xc001748158 0xc0017481a8] [0xc001748120 0xc001748158 0xc0017481a8] [0xc001748148 0xc001748190] [0x935700 0x935700] 0xc001b08900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:03:55.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:03:55.894: INFO: rc: 1
Feb  7 13:03:55.894: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a8510 exit status 1   true [0xc0017481b8 0xc001748220 0xc001748238] [0xc0017481b8 0xc001748220 0xc001748238] [0xc0017481e8 0xc001748230] [0x935700 0x935700] 0xc001b08c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:05.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:06.091: INFO: rc: 1
Feb  7 13:04:06.091: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b620 exit status 1   true [0xc0001971e8 0xc000197238 0xc000197258] [0xc0001971e8 0xc000197238 0xc000197258] [0xc000197228 0xc000197250] [0x935700 0x935700] 0xc00180e660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:16.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:16.235: INFO: rc: 1
Feb  7 13:04:16.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea63f0 exit status 1   true [0xc0003e0190 0xc0003e0218 0xc0003e02a8] [0xc0003e0190 0xc0003e0218 0xc0003e02a8] [0xc0003e01f0 0xc0003e02a0] [0x935700 0x935700] 0xc00169c000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:26.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:27.426: INFO: rc: 1
Feb  7 13:04:27.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a8690 exit status 1   true [0xc001748240 0xc001748258 0xc001748270] [0xc001748240 0xc001748258 0xc001748270] [0xc001748250 0xc001748268] [0x935700 0x935700] 0xc001b09020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:37.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:37.647: INFO: rc: 1
Feb  7 13:04:37.648: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea62a0 exit status 1   true [0xc0000e8150 0xc0003e0160 0xc0003e0190] [0xc0000e8150 0xc0003e0160 0xc0003e0190] [0xc0003e0138 0xc0003e0188] [0x935700 0x935700] 0xc001350540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:47.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:47.807: INFO: rc: 1
Feb  7 13:04:47.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea6420 exit status 1   true [0xc0003e01b0 0xc0003e0270 0xc0003e02c0] [0xc0003e01b0 0xc0003e0270 0xc0003e02c0] [0xc0003e0218 0xc0003e02a8] [0x935700 0x935700] 0xc0013519e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:04:57.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:04:57.935: INFO: rc: 1
Feb  7 13:04:57.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a81b0 exit status 1   true [0xc001748030 0xc0017480a0 0xc001748100] [0xc001748030 0xc0017480a0 0xc001748100] [0xc001748088 0xc0017480e0] [0x935700 0x935700] 0xc00104f2c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:07.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:08.079: INFO: rc: 1
Feb  7 13:05:08.079: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a83f0 exit status 1   true [0xc001748120 0xc001748158 0xc0017481a8] [0xc001748120 0xc001748158 0xc0017481a8] [0xc001748148 0xc001748190] [0x935700 0x935700] 0xc0019fb380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:18.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:18.213: INFO: rc: 1
Feb  7 13:05:18.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b110 exit status 1   true [0xc000196c80 0xc000196d38 0xc000196e38] [0xc000196c80 0xc000196d38 0xc000196e38] [0xc000196cf0 0xc000196dd8] [0x935700 0x935700] 0xc001b9b860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:28.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:28.387: INFO: rc: 1
Feb  7 13:05:28.388: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0009a85a0 exit status 1   true [0xc0017481b8 0xc001748220 0xc001748238] [0xc0017481b8 0xc001748220 0xc001748238] [0xc0017481e8 0xc001748230] [0x935700 0x935700] 0xc00169c000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:38.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:38.573: INFO: rc: 1
Feb  7 13:05:38.573: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea6750 exit status 1   true [0xc0003e02d0 0xc0003e02f0 0xc0003e0328] [0xc0003e02d0 0xc0003e02f0 0xc0003e0328] [0xc0003e02e8 0xc0003e0310] [0x935700 0x935700] 0xc001b08120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:48.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:48.733: INFO: rc: 1
Feb  7 13:05:48.733: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ea6900 exit status 1   true [0xc0003e0358 0xc0003e03a0 0xc0003e03f0] [0xc0003e0358 0xc0003e03a0 0xc0003e03f0] [0xc0003e0388 0xc0003e03b8] [0x935700 0x935700] 0xc001b08660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:05:58.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:05:58.859: INFO: rc: 1
Feb  7 13:05:58.859: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00024b2c0 exit status 1   true [0xc000196e60 0xc000196f18 0xc000196fe0] [0xc000196e60 0xc000196f18 0xc000196fe0] [0xc000196ec0 0xc000196fd0] [0x935700 0x935700] 0xc00180ef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:06:08.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:09.030: INFO: rc: 1
Feb  7 13:06:09.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c52150 exit status 1   true [0xc001176000 0xc001176018 0xc001176038] [0xc001176000 0xc001176018 0xc001176038] [0xc001176010 0xc001176030] [0x935700 0x935700] 0xc0013fbf80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:06:19.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:19.188: INFO: rc: 1
Feb  7 13:06:19.188: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c52270 exit status 1   true [0xc001176040 0xc001176058 0xc001176070] [0xc001176040 0xc001176058 0xc001176070] [0xc001176050 0xc001176068] [0x935700 0x935700] 0xc00194e180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:06:29.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:29.372: INFO: rc: 1
Feb  7 13:06:29.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c523c0 exit status 1   true [0xc001176078 0xc001176090 0xc0011760a8] [0xc001176078 0xc001176090 0xc0011760a8] [0xc001176088 0xc0011760a0] [0x935700 0x935700] 0xc0018a61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:06:39.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:39.549: INFO: rc: 1
Feb  7 13:06:39.549: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d360f0 exit status 1   true [0xc00040e1f8 0xc00040e2f8 0xc00040e330] [0xc00040e1f8 0xc00040e2f8 0xc00040e330] [0xc00040e2e8 0xc00040e328] [0x935700 0x935700] 0xc001477f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb  7 13:06:49.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-99cgv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  7 13:06:49.669: INFO: rc: 1
Feb  7 13:06:49.669: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb  7 13:06:49.669: INFO: Scaling statefulset ss to 0
Feb  7 13:06:49.704: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb  7 13:06:49.707: INFO: Deleting all statefulset in ns e2e-tests-statefulset-99cgv
Feb  7 13:06:49.710: INFO: Scaling statefulset ss to 0
Feb  7 13:06:49.721: INFO: Waiting for statefulset status.replicas updated to 0
Feb  7 13:06:49.723: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:06:49.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-99cgv" for this suite.
Feb  7 13:06:57.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:06:57.914: INFO: namespace: e2e-tests-statefulset-99cgv, resource: bindings, ignored listing per whitelist
Feb  7 13:06:58.064: INFO: namespace e2e-tests-statefulset-99cgv deletion completed in 8.305901661s
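The teardown's "Scaling statefulset ss to 0" followed by "Waiting for statefulset status.replicas updated to 0" amounts to polling until the observed status catches up with the requested scale. A hedged sketch of that stop condition, using simplified fake types rather than the real client-go API:

```go
package main

import "fmt"

// statefulSetState is a simplified stand-in for the two fields the
// teardown polls; the real test reads them from the API server.
type statefulSetState struct {
	SpecReplicas   int32 // desired count (spec.replicas), set to 0 by the scale
	StatusReplicas int32 // count the controller currently reports (status.replicas)
}

// scaleDownObserved reports whether the wait would stop: the scale to 0
// has been both requested and observed by the StatefulSet controller.
func scaleDownObserved(s statefulSetState) bool {
	return s.SpecReplicas == 0 && s.StatusReplicas == 0
}

func main() {
	// While pods are still terminating, status.replicas lags behind.
	fmt.Println(scaleDownObserved(statefulSetState{SpecReplicas: 0, StatusReplicas: 2}))
	fmt.Println(scaleDownObserved(statefulSetState{SpecReplicas: 0, StatusReplicas: 0}))
}
```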

• [SLOW TEST:382.024 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:06:58.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  7 13:06:58.386: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  7 13:07:03.404: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  7 13:07:11.454: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  7 13:07:13.476: INFO: Creating deployment "test-rollover-deployment"
Feb  7 13:07:13.539: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  7 13:07:15.695: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  7 13:07:15.712: INFO: Ensure that both replica sets have 1 created replica
Feb  7 13:07:15.721: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  7 13:07:15.738: INFO: Updating deployment test-rollover-deployment
Feb  7 13:07:15.738: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  7 13:07:18.006: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  7 13:07:18.016: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  7 13:07:18.022: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:18.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:20.045: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:20.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:22.044: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:22.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:24.229: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:24.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:26.051: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:26.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677636, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:28.070: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:28.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:30.060: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:30.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:32.045: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:32.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:34.144: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:34.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:36.041: INFO: all replica sets need to contain the pod-template-hash label
Feb  7 13:07:36.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:38.118: INFO: 
Feb  7 13:07:38.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677647, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716677633, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  7 13:07:40.064: INFO: 
Feb  7 13:07:40.065: INFO: Ensure that both old replica sets have no replicas
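The polling above repeats until the dumped DeploymentStatus reaches a "complete" state: every replica updated to the new template, every replica available, and the controller caught up with the latest generation. This is why the loop keeps printing statuses with Replicas:2, UpdatedReplicas:1 until the old replica set drains. A sketch of that stop condition (modeled on upstream's DeploymentComplete helper, not the exact code, with simplified stand-in types):

```go
package main

import "fmt"

// deployStatus holds simplified stand-ins for the status fields dumped
// repeatedly above; the real test uses k8s.io/api/apps/v1.DeploymentStatus.
type deployStatus struct {
	ObservedGeneration int64
	Replicas           int32
	UpdatedReplicas    int32
	AvailableReplicas  int32
}

// rolloverComplete reports whether the rollover poll can stop: all
// replicas run the new template, all are available, and the controller
// has observed at least the deployment's current generation.
func rolloverComplete(specReplicas int32, generation int64, s deployStatus) bool {
	return s.UpdatedReplicas == specReplicas &&
		s.Replicas == specReplicas &&
		s.AvailableReplicas == specReplicas &&
		s.ObservedGeneration >= generation
}

func main() {
	// The in-progress status from the log: 2 replicas total, only 1 updated/available.
	fmt.Println(rolloverComplete(1, 2, deployStatus{ObservedGeneration: 2, Replicas: 2, UpdatedReplicas: 1, AvailableReplicas: 1}))
	// The final status once the old replica set has scaled to zero.
	fmt.Println(rolloverComplete(1, 2, deployStatus{ObservedGeneration: 2, Replicas: 1, UpdatedReplicas: 1, AvailableReplicas: 1}))
}
```

With MinReadySeconds:10 set on this deployment (visible in the spec dump below), the new pod must also stay ready for 10 seconds before counting as available, which accounts for most of the polling iterations.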
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  7 13:07:40.080: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-rwqdr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwqdr/deployments/test-rollover-deployment,UID:c1e9aa32-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869684,Generation:2,CreationTimestamp:2020-02-07 13:07:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-07 13:07:13 +0000 UTC 2020-02-07 13:07:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-07 13:07:38 +0000 UTC 2020-02-07 13:07:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  7 13:07:40.085: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-rwqdr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwqdr/replicasets/test-rollover-deployment-5b8479fdb6,UID:c343e418-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869674,Generation:2,CreationTimestamp:2020-02-07 13:07:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1e9aa32-49aa-11ea-a994-fa163e34d433 0xc001a71d97 0xc001a71d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  7 13:07:40.085: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  7 13:07:40.085: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-rwqdr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwqdr/replicasets/test-rollover-controller,UID:b8cf917c-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869683,Generation:2,CreationTimestamp:2020-02-07 13:06:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1e9aa32-49aa-11ea-a994-fa163e34d433 0xc001a70e4f 0xc001a70e60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 13:07:40.085: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-rwqdr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rwqdr/replicasets/test-rollover-deployment-58494b7559,UID:c206fb4b-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869638,Generation:2,CreationTimestamp:2020-02-07 13:07:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1e9aa32-49aa-11ea-a994-fa163e34d433 0xc001a70f27 0xc001a70f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  7 13:07:40.101: INFO: Pod "test-rollover-deployment-5b8479fdb6-kh62f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-kh62f,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-rwqdr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rwqdr/pods/test-rollover-deployment-5b8479fdb6-kh62f,UID:c3a96127-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869659,Generation:0,CreationTimestamp:2020-02-07 13:07:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 c343e418-49aa-11ea-a994-fa163e34d433 0xc00250d297 0xc00250d298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kjdrn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kjdrn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-kjdrn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00250d3b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00250d3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:07:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:07:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:07:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-07 13:07:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-07 13:07:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-07 13:07:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0973eba7312a910c8a9d984be3b1413515d0d2e27646406d628736f88fca8f6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:07:40.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rwqdr" for this suite.
Feb  7 13:07:48.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:07:49.008: INFO: namespace: e2e-tests-deployment-rwqdr, resource: bindings, ignored listing per whitelist
Feb  7 13:07:49.268: INFO: namespace e2e-tests-deployment-rwqdr deletion completed in 9.156592343s

• [SLOW TEST:51.204 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:07:49.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  7 13:07:49.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869736,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 13:07:49.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869736,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  7 13:07:59.549: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869749,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  7 13:07:59.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869749,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  7 13:08:09.579: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869762,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 13:08:09.579: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869762,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  7 13:08:19.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869776,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  7 13:08:19.631: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-a,UID:d7644dfc-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869776,Generation:0,CreationTimestamp:2020-02-07 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  7 13:08:29.663: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-b,UID:ef4e81ba-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869788,Generation:0,CreationTimestamp:2020-02-07 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 13:08:29.663: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-b,UID:ef4e81ba-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869788,Generation:0,CreationTimestamp:2020-02-07 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  7 13:08:39.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-b,UID:ef4e81ba-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869801,Generation:0,CreationTimestamp:2020-02-07 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  7 13:08:39.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dhhsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-dhhsl/configmaps/e2e-watch-test-configmap-b,UID:ef4e81ba-49aa-11ea-a994-fa163e34d433,ResourceVersion:20869801,Generation:0,CreationTimestamp:2020-02-07 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:08:49.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-dhhsl" for this suite.
Feb  7 13:08:57.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:08:57.917: INFO: namespace: e2e-tests-watch-dhhsl, resource: bindings, ignored listing per whitelist
Feb  7 13:08:58.050: INFO: namespace e2e-tests-watch-dhhsl deletion completed in 8.267941787s

• [SLOW TEST:68.782 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:08:58.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  7 13:08:58.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:00.231: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  7 13:09:00.232: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  7 13:09:00.251: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  7 13:09:00.334: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  7 13:09:00.394: INFO: scanned /root for discovery docs: 
Feb  7 13:09:00.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:28.353: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  7 13:09:28.353: INFO: stdout: "Created e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884\nScaling up e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb  7 13:09:28.353: INFO: stdout: "Created e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884\nScaling up e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  7 13:09:28.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:28.582: INFO: stderr: ""
Feb  7 13:09:28.582: INFO: stdout: "e2e-test-nginx-rc-bl26s e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884-rqlrs "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  7 13:09:33.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:33.731: INFO: stderr: ""
Feb  7 13:09:33.731: INFO: stdout: "e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884-rqlrs "
Feb  7 13:09:33.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884-rqlrs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:33.883: INFO: stderr: ""
Feb  7 13:09:33.883: INFO: stdout: "true"
Feb  7 13:09:33.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884-rqlrs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:34.116: INFO: stderr: ""
Feb  7 13:09:34.117: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  7 13:09:34.117: INFO: e2e-test-nginx-rc-c031a79e81c5b90c378c46b931cf5884-rqlrs is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  7 13:09:34.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8pkx2'
Feb  7 13:09:34.229: INFO: stderr: ""
Feb  7 13:09:34.229: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:09:34.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8pkx2" for this suite.
Feb  7 13:09:46.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:09:46.537: INFO: namespace: e2e-tests-kubectl-8pkx2, resource: bindings, ignored listing per whitelist
Feb  7 13:09:46.793: INFO: namespace e2e-tests-kubectl-8pkx2 deletion completed in 12.555344773s

• [SLOW TEST:48.742 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:09:46.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb  7 13:09:59.773: INFO: Successfully updated pod "annotationupdate1d725906-49ab-11ea-abae-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:10:01.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hn6cw" for this suite.
Feb  7 13:10:25.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:10:26.056: INFO: namespace: e2e-tests-downward-api-hn6cw, resource: bindings, ignored listing per whitelist
Feb  7 13:10:26.261: INFO: namespace e2e-tests-downward-api-hn6cw deletion completed in 24.365570765s

• [SLOW TEST:39.467 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:10:26.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  7 13:10:41.245: INFO: Successfully updated pod "pod-update-34f6df39-49ab-11ea-abae-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Feb  7 13:10:41.400: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:10:41.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-twdt9" for this suite.
Feb  7 13:11:05.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:11:05.679: INFO: namespace: e2e-tests-pods-twdt9, resource: bindings, ignored listing per whitelist
Feb  7 13:11:05.712: INFO: namespace e2e-tests-pods-twdt9 deletion completed in 24.29461708s

• [SLOW TEST:39.451 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:11:05.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:11:05.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-2jfgv" to be "success or failure"
Feb  7 13:11:05.997: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.94165ms
Feb  7 13:11:08.021: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061554136s
Feb  7 13:11:10.059: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09956492s
Feb  7 13:11:12.905: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945928007s
Feb  7 13:11:15.541: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.581662484s
Feb  7 13:11:17.552: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.592508479s
Feb  7 13:11:19.697: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.737563787s
Feb  7 13:11:22.218: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.258766388s
STEP: Saw pod success
Feb  7 13:11:22.218: INFO: Pod "downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 13:11:22.269: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 13:11:23.124: INFO: Waiting for pod downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005 to disappear
Feb  7 13:11:23.173: INFO: Pod downwardapi-volume-4c75c2ac-49ab-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:11:23.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2jfgv" for this suite.
Feb  7 13:11:31.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:11:31.406: INFO: namespace: e2e-tests-projected-2jfgv, resource: bindings, ignored listing per whitelist
Feb  7 13:11:31.827: INFO: namespace e2e-tests-projected-2jfgv deletion completed in 8.639912534s

• [SLOW TEST:26.115 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:11:31.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  7 13:11:32.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005" in namespace "e2e-tests-projected-qqqvl" to be "success or failure"
Feb  7 13:11:32.175: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.069066ms
Feb  7 13:11:34.533: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405013752s
Feb  7 13:11:36.567: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438490947s
Feb  7 13:11:38.611: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482605871s
Feb  7 13:11:40.645: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516461265s
Feb  7 13:11:42.663: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.534906396s
Feb  7 13:11:44.688: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.559058017s
STEP: Saw pod success
Feb  7 13:11:44.688: INFO: Pod "downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 13:11:44.702: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005 container client-container: 
STEP: delete the pod
Feb  7 13:11:44.793: INFO: Waiting for pod downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005 to disappear
Feb  7 13:11:44.800: INFO: Pod downwardapi-volume-5c10aead-49ab-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:11:44.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qqqvl" for this suite.
Feb  7 13:11:50.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:11:50.991: INFO: namespace: e2e-tests-projected-qqqvl, resource: bindings, ignored listing per whitelist
Feb  7 13:11:51.023: INFO: namespace e2e-tests-projected-qqqvl deletion completed in 6.214628167s

• [SLOW TEST:19.196 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:11:51.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-6787c347-49ab-11ea-abae-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  7 13:11:51.403: INFO: Waiting up to 5m0s for pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005" in namespace "e2e-tests-secrets-qlsxk" to be "success or failure"
Feb  7 13:11:51.767: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 363.916372ms
Feb  7 13:11:53.799: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3964444s
Feb  7 13:11:55.814: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410964719s
Feb  7 13:11:57.850: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446892144s
Feb  7 13:11:59.907: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504232838s
Feb  7 13:12:01.947: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.544628856s
Feb  7 13:12:03.962: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.559427802s
STEP: Saw pod success
Feb  7 13:12:03.962: INFO: Pod "pod-secrets-678c0408-49ab-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 13:12:03.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-678c0408-49ab-11ea-abae-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  7 13:12:04.156: INFO: Waiting for pod pod-secrets-678c0408-49ab-11ea-abae-0242ac110005 to disappear
Feb  7 13:12:04.628: INFO: Pod pod-secrets-678c0408-49ab-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:12:04.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qlsxk" for this suite.
Feb  7 13:12:12.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:12:13.161: INFO: namespace: e2e-tests-secrets-qlsxk, resource: bindings, ignored listing per whitelist
Feb  7 13:12:13.177: INFO: namespace e2e-tests-secrets-qlsxk deletion completed in 8.369237887s

• [SLOW TEST:22.154 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:12:13.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb  7 13:12:13.500: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:12:13.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-47h55" for this suite.
Feb  7 13:12:19.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:12:19.728: INFO: namespace: e2e-tests-kubectl-47h55, resource: bindings, ignored listing per whitelist
Feb  7 13:12:19.895: INFO: namespace e2e-tests-kubectl-47h55 deletion completed in 6.250712564s

• [SLOW TEST:6.717 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:12:19.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb  7 13:12:20.092: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  7 13:12:20.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:20.522: INFO: stderr: ""
Feb  7 13:12:20.522: INFO: stdout: "service/redis-slave created\n"
Feb  7 13:12:20.523: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  7 13:12:20.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:20.939: INFO: stderr: ""
Feb  7 13:12:20.939: INFO: stdout: "service/redis-master created\n"
Feb  7 13:12:20.939: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  7 13:12:20.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:21.379: INFO: stderr: ""
Feb  7 13:12:21.380: INFO: stdout: "service/frontend created\n"
Feb  7 13:12:21.381: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  7 13:12:21.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:21.778: INFO: stderr: ""
Feb  7 13:12:21.778: INFO: stdout: "deployment.extensions/frontend created\n"
Feb  7 13:12:21.779: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  7 13:12:21.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:22.288: INFO: stderr: ""
Feb  7 13:12:22.289: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb  7 13:12:22.289: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  7 13:12:22.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:12:23.089: INFO: stderr: ""
Feb  7 13:12:23.089: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb  7 13:12:23.089: INFO: Waiting for all frontend pods to be Running.
Feb  7 13:13:03.143: INFO: Waiting for frontend to serve content.
Feb  7 13:13:05.799: INFO: Trying to add a new entry to the guestbook.
Feb  7 13:13:05.902: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  7 13:13:05.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:06.779: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:06.779: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 13:13:06.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:07.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:07.005: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 13:13:07.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:07.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:07.220: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 13:13:07.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:07.377: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:07.377: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 13:13:07.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:07.590: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:07.590: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  7 13:13:07.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fnhzh'
Feb  7 13:13:08.193: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  7 13:13:08.193: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:13:08.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fnhzh" for this suite.
Feb  7 13:13:54.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:13:54.779: INFO: namespace: e2e-tests-kubectl-fnhzh, resource: bindings, ignored listing per whitelist
Feb  7 13:13:54.781: INFO: namespace e2e-tests-kubectl-fnhzh deletion completed in 46.447646968s

• [SLOW TEST:94.886 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:13:54.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:13:54.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nn7bj" for this suite.
Feb  7 13:14:19.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:14:19.226: INFO: namespace: e2e-tests-pods-nn7bj, resource: bindings, ignored listing per whitelist
Feb  7 13:14:19.261: INFO: namespace e2e-tests-pods-nn7bj deletion completed in 24.290724837s

• [SLOW TEST:24.480 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:14:19.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  7 13:14:36.827: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:14:37.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-bw2jk" for this suite.
Feb  7 13:15:03.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:15:04.272: INFO: namespace: e2e-tests-replicaset-bw2jk, resource: bindings, ignored listing per whitelist
Feb  7 13:15:04.326: INFO: namespace e2e-tests-replicaset-bw2jk deletion completed in 26.400892562s

• [SLOW TEST:45.064 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:15:04.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  7 13:15:04.818: INFO: Waiting up to 5m0s for pod "pod-dac049f7-49ab-11ea-abae-0242ac110005" in namespace "e2e-tests-emptydir-bg9b8" to be "success or failure"
Feb  7 13:15:04.843: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.395577ms
Feb  7 13:15:07.023: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204921556s
Feb  7 13:15:09.039: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220208023s
Feb  7 13:15:11.068: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249691507s
Feb  7 13:15:13.089: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270311248s
Feb  7 13:15:15.148: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329414856s
Feb  7 13:15:17.606: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.787279747s
STEP: Saw pod success
Feb  7 13:15:17.606: INFO: Pod "pod-dac049f7-49ab-11ea-abae-0242ac110005" satisfied condition "success or failure"
Feb  7 13:15:17.624: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dac049f7-49ab-11ea-abae-0242ac110005 container test-container: 
STEP: delete the pod
Feb  7 13:15:18.337: INFO: Waiting for pod pod-dac049f7-49ab-11ea-abae-0242ac110005 to disappear
Feb  7 13:15:18.688: INFO: Pod pod-dac049f7-49ab-11ea-abae-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:15:18.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bg9b8" for this suite.
Feb  7 13:15:24.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:15:24.846: INFO: namespace: e2e-tests-emptydir-bg9b8, resource: bindings, ignored listing per whitelist
Feb  7 13:15:24.956: INFO: namespace e2e-tests-emptydir-bg9b8 deletion completed in 6.231999903s

• [SLOW TEST:20.629 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:15:24.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r92jg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  7 13:15:25.227: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  7 13:15:59.583: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r92jg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  7 13:15:59.583: INFO: >>> kubeConfig: /root/.kube/config
I0207 13:15:59.665545       9 log.go:172] (0xc001b964d0) (0xc00181b720) Create stream
I0207 13:15:59.665672       9 log.go:172] (0xc001b964d0) (0xc00181b720) Stream added, broadcasting: 1
I0207 13:15:59.672370       9 log.go:172] (0xc001b964d0) Reply frame received for 1
I0207 13:15:59.672429       9 log.go:172] (0xc001b964d0) (0xc001abafa0) Create stream
I0207 13:15:59.672445       9 log.go:172] (0xc001b964d0) (0xc001abafa0) Stream added, broadcasting: 3
I0207 13:15:59.673438       9 log.go:172] (0xc001b964d0) Reply frame received for 3
I0207 13:15:59.673466       9 log.go:172] (0xc001b964d0) (0xc00181b860) Create stream
I0207 13:15:59.673482       9 log.go:172] (0xc001b964d0) (0xc00181b860) Stream added, broadcasting: 5
I0207 13:15:59.674272       9 log.go:172] (0xc001b964d0) Reply frame received for 5
I0207 13:16:00.829507       9 log.go:172] (0xc001b964d0) Data frame received for 3
I0207 13:16:00.829559       9 log.go:172] (0xc001abafa0) (3) Data frame handling
I0207 13:16:00.829585       9 log.go:172] (0xc001abafa0) (3) Data frame sent
I0207 13:16:01.097703       9 log.go:172] (0xc001b964d0) Data frame received for 1
I0207 13:16:01.097769       9 log.go:172] (0xc001b964d0) (0xc001abafa0) Stream removed, broadcasting: 3
I0207 13:16:01.097820       9 log.go:172] (0xc00181b720) (1) Data frame handling
I0207 13:16:01.097844       9 log.go:172] (0xc00181b720) (1) Data frame sent
I0207 13:16:01.097886       9 log.go:172] (0xc001b964d0) (0xc00181b860) Stream removed, broadcasting: 5
I0207 13:16:01.097937       9 log.go:172] (0xc001b964d0) (0xc00181b720) Stream removed, broadcasting: 1
I0207 13:16:01.097972       9 log.go:172] (0xc001b964d0) Go away received
I0207 13:16:01.098092       9 log.go:172] (0xc001b964d0) (0xc00181b720) Stream removed, broadcasting: 1
I0207 13:16:01.098110       9 log.go:172] (0xc001b964d0) (0xc001abafa0) Stream removed, broadcasting: 3
I0207 13:16:01.098122       9 log.go:172] (0xc001b964d0) (0xc00181b860) Stream removed, broadcasting: 5
Feb  7 13:16:01.098: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:16:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r92jg" for this suite.
Feb  7 13:16:25.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:16:25.200: INFO: namespace: e2e-tests-pod-network-test-r92jg, resource: bindings, ignored listing per whitelist
Feb  7 13:16:25.326: INFO: namespace e2e-tests-pod-network-test-r92jg deletion completed in 24.214438526s

• [SLOW TEST:60.370 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  7 13:16:25.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-pb582
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-pb582
STEP: Deleting pre-stop pod
Feb  7 13:16:54.718: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  7 13:16:54.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-pb582" for this suite.
Feb  7 13:17:37.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  7 13:17:37.248: INFO: namespace: e2e-tests-prestop-pb582, resource: bindings, ignored listing per whitelist
Feb  7 13:17:37.265: INFO: namespace e2e-tests-prestop-pb582 deletion completed in 42.445418141s

• [SLOW TEST:71.939 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
Feb  7 13:17:37.266: INFO: Running AfterSuite actions on all nodes
Feb  7 13:17:37.266: INFO: Running AfterSuite actions on node 1
Feb  7 13:17:37.266: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9022.006 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS