I1220 10:47:14.773160 8 e2e.go:224] Starting e2e run "1524308f-2316-11ea-851f-0242ac110004" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576838833 - Will randomize all specs
Will run 201 of 2164 specs

Dec 20 10:47:15.145: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 10:47:15.150: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 20 10:47:15.175: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 20 10:47:15.243: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 20 10:47:15.243: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 20 10:47:15.243: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 20 10:47:15.255: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 20 10:47:15.255: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 20 10:47:15.255: INFO: e2e test version: v1.13.12
Dec 20 10:47:15.256: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:47:15.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
Dec 20 10:47:15.390: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 20 10:47:15.395: INFO: PodSpec: initContainers in spec.initContainers
Dec 20 10:48:17.924: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1605e329-2316-11ea-851f-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-l4gtw", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-l4gtw/pods/pod-init-1605e329-2316-11ea-851f-0242ac110004", UID:"16077df2-2316-11ea-a994-fa163e34d433", ResourceVersion:"15443685", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712435635, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"395886115"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dwqtj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0010b8bc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil),
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwqtj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwqtj", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dwqtj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ff2128), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00084eba0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ff21a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ff21c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ff21c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ff21cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712435635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712435635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712435635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712435635, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000ca1500), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f55960)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f559d0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d6fb9e0b5bfcc0db5cfd7cab00adf27cefe7601e3afdedaa9bb1a8634ebc676d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ca1540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ca1520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:48:17.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l4gtw" for this suite.
Dec 20 10:48:42.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:48:42.266: INFO: namespace: e2e-tests-init-container-l4gtw, resource: bindings, ignored listing per whitelist
Dec 20 10:48:42.388: INFO: namespace e2e-tests-init-container-l4gtw deletion completed in 24.27685523s

• [SLOW TEST:87.132 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:48:42.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 10:48:46.145: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4c02bbaa-2316-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0012deaa2), BlockOwnerDeletion:(*bool)(0xc0012deaa3)}}
Dec 20 10:48:46.220: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4bdd893e-2316-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0011c5b62), BlockOwnerDeletion:(*bool)(0xc0011c5b63)}}
Dec 20 10:48:46.413: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4bff7c42-2316-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0012dec42), BlockOwnerDeletion:(*bool)(0xc0012dec43)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:48:51.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lj74x" for this suite.
Dec 20 10:48:57.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:48:57.810: INFO: namespace: e2e-tests-gc-lj74x, resource: bindings, ignored listing per whitelist
Dec 20 10:48:57.835: INFO: namespace e2e-tests-gc-lj74x deletion completed in 6.288993767s

• [SLOW TEST:15.446 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:48:57.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5331fba3-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 10:48:58.041: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-qk8bb" to be "success or failure"
Dec 20 10:48:58.048: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.91227ms
Dec 20 10:49:00.064: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022786809s
Dec 20 10:49:02.077: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036442507s
Dec 20 10:49:04.193: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152315198s
Dec 20 10:49:06.214: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17315475s
Dec 20 10:49:08.225: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184230067s
STEP: Saw pod success
Dec 20 10:49:08.225: INFO: Pod "pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:49:08.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 10:49:08.722: INFO: Waiting for pod pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:49:09.160: INFO: Pod pod-projected-secrets-5332aa5b-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:49:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qk8bb" for this suite.
Dec 20 10:49:15.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:49:15.480: INFO: namespace: e2e-tests-projected-qk8bb, resource: bindings, ignored listing per whitelist
Dec 20 10:49:15.583: INFO: namespace e2e-tests-projected-qk8bb deletion completed in 6.314043386s

• [SLOW TEST:17.747 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:49:15.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 10:49:15.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tr8m4'
Dec 20 10:49:18.404: INFO: stderr: ""
Dec 20 10:49:18.404: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 20 10:49:18.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-tr8m4'
Dec 20 10:49:25.265: INFO: stderr: ""
Dec 20 10:49:25.266: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:49:25.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tr8m4" for this suite.
Dec 20 10:49:31.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:49:31.613: INFO: namespace: e2e-tests-kubectl-tr8m4, resource: bindings, ignored listing per whitelist
Dec 20 10:49:31.633: INFO: namespace e2e-tests-kubectl-tr8m4 deletion completed in 6.327387237s

• [SLOW TEST:16.049 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:49:31.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 10:49:31.803: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 20.364903ms)
Dec 20 10:49:31.809: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.800358ms)
Dec 20 10:49:31.814: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.001494ms)
Dec 20 10:49:31.824: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.255226ms)
Dec 20 10:49:31.829: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.884873ms)
Dec 20 10:49:31.834: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.392774ms)
Dec 20 10:49:31.881: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 46.912795ms)
Dec 20 10:49:31.890: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.009561ms)
Dec 20 10:49:31.895: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.51352ms)
Dec 20 10:49:31.901: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.491664ms)
Dec 20 10:49:31.907: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.279652ms)
Dec 20 10:49:31.913: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.728289ms)
Dec 20 10:49:31.918: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.699728ms)
Dec 20 10:49:31.923: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.691712ms)
Dec 20 10:49:31.931: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.599251ms)
Dec 20 10:49:31.935: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.362628ms)
Dec 20 10:49:31.941: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.801838ms)
Dec 20 10:49:31.946: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.352933ms)
Dec 20 10:49:31.953: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.073113ms)
Dec 20 10:49:31.958: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.118599ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:49:31.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-gsr6d" for this suite.
Dec 20 10:49:38.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:49:38.157: INFO: namespace: e2e-tests-proxy-gsr6d, resource: bindings, ignored listing per whitelist
Dec 20 10:49:38.260: INFO: namespace e2e-tests-proxy-gsr6d deletion completed in 6.297686694s

• [SLOW TEST:6.627 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
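Each of the twenty timed requests in the test above hits the node's `logs` proxy subresource on the apiserver. As a reference sketch (the helper name is hypothetical; the node name is taken from this run), the request path is built as follows, and against a live cluster the same path can be fetched with `kubectl get --raw <path>`:

```python
def node_logs_proxy_path(node_name: str) -> str:
    """Build the apiserver path for a node's 'logs' proxy subresource.

    This is the path the e2e test requests repeatedly above, e.g.
    /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/
    """
    return f"/api/v1/nodes/{node_name}/proxy/logs/"

print(node_logs_proxy_path("hunter-server-hu5at5svl7ps"))
```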
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:49:38.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6b4ca939-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 10:49:38.515: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-2qfrz" to be "success or failure"
Dec 20 10:49:38.537: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.307729ms
Dec 20 10:49:40.571: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056370551s
Dec 20 10:49:42.599: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083783078s
Dec 20 10:49:44.624: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108616836s
Dec 20 10:49:46.644: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129407591s
Dec 20 10:49:48.700: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184723715s
STEP: Saw pod success
Dec 20 10:49:48.700: INFO: Pod "pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:49:48.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 10:49:48.974: INFO: Waiting for pod pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:49:48.986: INFO: Pod pod-projected-configmaps-6b4fee58-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:49:48.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2qfrz" for this suite.
Dec 20 10:49:55.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:49:55.245: INFO: namespace: e2e-tests-projected-2qfrz, resource: bindings, ignored listing per whitelist
Dec 20 10:49:55.303: INFO: namespace e2e-tests-projected-2qfrz deletion completed in 6.3057874s

• [SLOW TEST:17.043 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
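The pod created by the test above mounts a ConfigMap through a `projected` volume and runs as a non-root user. A minimal manifest of the same general shape (a sketch: the names, user ID, and command here are illustrative placeholders, not the generated values from the run above):

```
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # placeholder name
spec:
  securityContext:
    runAsUser: 1000                        # non-root, as the test title requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/data-1"]  # illustrative
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # placeholder ConfigMap name
  restartPolicy: Never
```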
SSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:49:55.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 20 10:50:03.671: INFO: Pod pod-hostip-757754d1-2316-11ea-851f-0242ac110004 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:50:03.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9kqq4" for this suite.
Dec 20 10:50:31.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:50:31.788: INFO: namespace: e2e-tests-pods-9kqq4, resource: bindings, ignored listing per whitelist
Dec 20 10:50:31.896: INFO: namespace e2e-tests-pods-9kqq4 deletion completed in 28.218562553s

• [SLOW TEST:36.592 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:50:31.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-8b3fc7eb-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 10:50:32.097: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-kfn78" to be "success or failure"
Dec 20 10:50:32.178: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 80.775077ms
Dec 20 10:50:34.213: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115698617s
Dec 20 10:50:36.239: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142389013s
Dec 20 10:50:41.239: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.142234658s
Dec 20 10:50:43.253: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.156224388s
Dec 20 10:50:45.267: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 13.169532476s
Dec 20 10:50:47.360: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.2625247s
STEP: Saw pod success
Dec 20 10:50:47.360: INFO: Pod "pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:50:47.370: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 10:50:47.732: INFO: Waiting for pod pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:50:47.743: INFO: Pod pod-projected-secrets-8b4053cd-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:50:47.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kfn78" for this suite.
Dec 20 10:50:53.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:50:53.954: INFO: namespace: e2e-tests-projected-kfn78, resource: bindings, ignored listing per whitelist
Dec 20 10:50:54.058: INFO: namespace e2e-tests-projected-kfn78 deletion completed in 6.30471166s

• [SLOW TEST:22.161 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:50:54.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9887fcbb-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 10:50:54.379: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-prhpv" to be "success or failure"
Dec 20 10:50:54.394: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.380312ms
Dec 20 10:50:56.406: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026069849s
Dec 20 10:50:58.463: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083493243s
Dec 20 10:51:01.497: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.117815886s
Dec 20 10:51:03.523: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.143428138s
Dec 20 10:51:05.536: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.156482181s
STEP: Saw pod success
Dec 20 10:51:05.536: INFO: Pod "pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:51:05.545: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 10:51:06.091: INFO: Waiting for pod pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:51:06.536: INFO: Pod pod-projected-configmaps-9888a0d1-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:51:06.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-prhpv" for this suite.
Dec 20 10:51:14.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:51:14.818: INFO: namespace: e2e-tests-projected-prhpv, resource: bindings, ignored listing per whitelist
Dec 20 10:51:14.932: INFO: namespace e2e-tests-projected-prhpv deletion completed in 8.350843987s

• [SLOW TEST:20.874 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:51:14.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-a4e28d53-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 10:51:15.231: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-txddf" to be "success or failure"
Dec 20 10:51:15.254: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.118956ms
Dec 20 10:51:17.280: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048819184s
Dec 20 10:51:19.293: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06156754s
Dec 20 10:51:21.504: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273313092s
Dec 20 10:51:23.523: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291824514s
Dec 20 10:51:25.538: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306486631s
Dec 20 10:51:27.548: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.317006165s
STEP: Saw pod success
Dec 20 10:51:27.548: INFO: Pod "pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:51:27.553: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 10:51:29.055: INFO: Waiting for pod pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:51:29.067: INFO: Pod pod-projected-configmaps-a4f16dea-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:51:29.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-txddf" for this suite.
Dec 20 10:51:35.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:51:35.470: INFO: namespace: e2e-tests-projected-txddf, resource: bindings, ignored listing per whitelist
Dec 20 10:51:35.592: INFO: namespace e2e-tests-projected-txddf deletion completed in 6.514207389s

• [SLOW TEST:20.659 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:51:35.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b1421e5f-2316-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 10:51:35.881: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-jcjcv" to be "success or failure"
Dec 20 10:51:36.050: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 168.794762ms
Dec 20 10:51:38.082: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201378978s
Dec 20 10:51:40.106: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224748763s
Dec 20 10:51:42.404: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.52270377s
Dec 20 10:51:44.434: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553047079s
Dec 20 10:51:46.659: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.777828397s
Dec 20 10:51:48.673: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.79162305s
STEP: Saw pod success
Dec 20 10:51:48.673: INFO: Pod "pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:51:48.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 10:51:49.063: INFO: Waiting for pod pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:51:49.079: INFO: Pod pod-projected-secrets-b1446fea-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:51:49.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jcjcv" for this suite.
Dec 20 10:51:55.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:51:55.250: INFO: namespace: e2e-tests-projected-jcjcv, resource: bindings, ignored listing per whitelist
Dec 20 10:51:55.282: INFO: namespace e2e-tests-projected-jcjcv deletion completed in 6.1927968s

• [SLOW TEST:19.689 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:51:55.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 10:51:55.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-69mmm" to be "success or failure"
Dec 20 10:51:55.472: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.433912ms
Dec 20 10:51:57.491: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032007402s
Dec 20 10:51:59.510: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050836187s
Dec 20 10:52:01.534: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074775746s
Dec 20 10:52:03.549: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089658616s
Dec 20 10:52:05.561: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102300142s
STEP: Saw pod success
Dec 20 10:52:05.562: INFO: Pod "downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:52:05.566: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 10:52:05.681: INFO: Waiting for pod downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:52:06.653: INFO: Pod downwardapi-volume-bcf29c8f-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:52:06.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-69mmm" for this suite.
Dec 20 10:52:12.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:52:13.067: INFO: namespace: e2e-tests-downward-api-69mmm, resource: bindings, ignored listing per whitelist
Dec 20 10:52:13.088: INFO: namespace e2e-tests-downward-api-69mmm deletion completed in 6.347810503s

• [SLOW TEST:17.806 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:52:13.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 20 10:52:13.314: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:52:30.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bcg78" for this suite.
Dec 20 10:52:37.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:52:37.164: INFO: namespace: e2e-tests-init-container-bcg78, resource: bindings, ignored listing per whitelist
Dec 20 10:52:37.199: INFO: namespace e2e-tests-init-container-bcg78 deletion completed in 6.224898701s

• [SLOW TEST:24.111 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:52:37.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 20 10:52:37.403: INFO: Waiting up to 5m0s for pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-qj5ql" to be "success or failure"
Dec 20 10:52:37.465: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 61.793319ms
Dec 20 10:52:39.491: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087734813s
Dec 20 10:52:41.505: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101683432s
Dec 20 10:52:43.666: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262258439s
Dec 20 10:52:45.683: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279232993s
Dec 20 10:52:48.148: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.74444351s
STEP: Saw pod success
Dec 20 10:52:48.148: INFO: Pod "pod-d5f0cd3c-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:52:48.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d5f0cd3c-2316-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 10:52:48.579: INFO: Waiting for pod pod-d5f0cd3c-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:52:48.596: INFO: Pod pod-d5f0cd3c-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:52:48.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qj5ql" for this suite.
Dec 20 10:52:56.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:52:56.901: INFO: namespace: e2e-tests-emptydir-qj5ql, resource: bindings, ignored listing per whitelist
Dec 20 10:52:56.907: INFO: namespace e2e-tests-emptydir-qj5ql deletion completed in 8.300900708s

• [SLOW TEST:19.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:52:56.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:53:07.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9bl2h" for this suite.
Dec 20 10:53:13.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:53:13.756: INFO: namespace: e2e-tests-emptydir-wrapper-9bl2h, resource: bindings, ignored listing per whitelist
Dec 20 10:53:13.831: INFO: namespace e2e-tests-emptydir-wrapper-9bl2h deletion completed in 6.45254417s

• [SLOW TEST:16.924 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:53:13.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 10:53:14.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-c4fzp" to be "success or failure"
Dec 20 10:53:14.111: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.983375ms
Dec 20 10:53:16.268: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161613822s
Dec 20 10:53:18.282: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175993067s
Dec 20 10:53:20.354: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247589868s
Dec 20 10:53:22.392: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285723738s
Dec 20 10:53:24.495: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.38879443s
STEP: Saw pod success
Dec 20 10:53:24.495: INFO: Pod "downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:53:24.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 10:53:24.855: INFO: Waiting for pod downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004 to disappear
Dec 20 10:53:24.869: INFO: Pod downwardapi-volume-ebd2a065-2316-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:53:24.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c4fzp" for this suite.
Dec 20 10:53:31.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:53:31.288: INFO: namespace: e2e-tests-downward-api-c4fzp, resource: bindings, ignored listing per whitelist
Dec 20 10:53:31.311: INFO: namespace e2e-tests-downward-api-c4fzp deletion completed in 6.425140202s

• [SLOW TEST:17.480 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:53:31.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f6394666-2316-11ea-851f-0242ac110004
STEP: Creating secret with name s-test-opt-upd-f6394861-2316-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f6394666-2316-11ea-851f-0242ac110004
STEP: Updating secret s-test-opt-upd-f6394861-2316-11ea-851f-0242ac110004
STEP: Creating secret with name s-test-opt-create-f63948a9-2316-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:54:54.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-67lnm" for this suite.
Dec 20 10:55:18.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:55:18.451: INFO: namespace: e2e-tests-projected-67lnm, resource: bindings, ignored listing per whitelist
Dec 20 10:55:18.532: INFO: namespace e2e-tests-projected-67lnm deletion completed in 24.246319792s

• [SLOW TEST:107.220 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
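The projected-secret test above creates, deletes, and updates secrets marked `optional` and waits for the mounted volume to converge. A sketch of the pod spec it implies (secret names shortened from the generated ones in the log) uses a `projected` volume with optional secret sources:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # illustrative name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # deleted mid-test; optional, so the pod keeps running
          optional: true
      - secret:
          name: s-test-opt-upd      # updated mid-test; kubelet refreshes the file
          optional: true
      - secret:
          name: s-test-opt-create   # created after the pod; appears once it exists
          optional: true
```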
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:55:18.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 10:55:18.793: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 20 10:55:18.817: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 20 10:55:23.841: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 20 10:55:28.392: INFO: Creating deployment "test-rolling-update-deployment"
Dec 20 10:55:28.409: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 20 10:55:28.448: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 20 10:55:30.577: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 20 10:55:30.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 10:55:32.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 10:55:35.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 10:55:36.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712436128, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 10:55:38.602: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 20 10:55:38.620: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-zncrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zncrn/deployments/test-rolling-update-deployment,UID:3be08303-2317-11ea-a994-fa163e34d433,ResourceVersion:15444680,Generation:1,CreationTimestamp:2019-12-20 10:55:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-20 10:55:28 +0000 UTC 2019-12-20 10:55:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-20 10:55:38 +0000 UTC 2019-12-20 10:55:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 20 10:55:38.624: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-zncrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zncrn/replicasets/test-rolling-update-deployment-75db98fb4c,UID:3bf01641-2317-11ea-a994-fa163e34d433,ResourceVersion:15444671,Generation:1,CreationTimestamp:2019-12-20 10:55:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3be08303-2317-11ea-a994-fa163e34d433 0xc0012def17 0xc0012def18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 20 10:55:38.624: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 20 10:55:38.624: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-zncrn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zncrn/replicasets/test-rolling-update-controller,UID:36272899-2317-11ea-a994-fa163e34d433,ResourceVersion:15444679,Generation:2,CreationTimestamp:2019-12-20 10:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3be08303-2317-11ea-a994-fa163e34d433 0xc0012dee57 0xc0012dee58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 10:55:38.629: INFO: Pod "test-rolling-update-deployment-75db98fb4c-8prrh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-8prrh,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-zncrn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zncrn/pods/test-rolling-update-deployment-75db98fb4c-8prrh,UID:3bfb3bbd-2317-11ea-a994-fa163e34d433,ResourceVersion:15444670,Generation:0,CreationTimestamp:2019-12-20 10:55:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 3bf01641-2317-11ea-a994-fa163e34d433 0xc0012df7e7 0xc0012df7e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ddxdc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ddxdc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ddxdc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012df850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012df870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 10:55:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 10:55:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 10:55:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 10:55:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-20 10:55:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-20 10:55:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ee1d35b066e22f134feb5a76e19ca099dff40aaa5acddbb1e9b2879a188763b2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:55:38.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-zncrn" for this suite.
Dec 20 10:55:45.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:55:45.314: INFO: namespace: e2e-tests-deployment-zncrn, resource: bindings, ignored listing per whitelist
Dec 20 10:55:45.377: INFO: namespace e2e-tests-deployment-zncrn deletion completed in 6.742654751s

• [SLOW TEST:26.845 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
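The Deployment dump above shows the rolling-update strategy the test exercised: one replica, `RollingUpdate` with 25% maxUnavailable/maxSurge, and a redis template that replaces the adopted nginx replica set. Reconstructed as a manifest from the values in the log (metadata name shortened for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod            # matches pods of the adopted replica set
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%               # one extra pod may exist during the rollout
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Applying this over a matching replica set produces exactly what the log records: the controller scales the new `75db98fb4c` replica set to 1 and the old `test-rolling-update-controller` replica set down to 0.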
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:55:45.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 10:55:47.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-5mlgg" to be "success or failure"
Dec 20 10:55:47.857: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 60.902256ms
Dec 20 10:55:49.905: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108490464s
Dec 20 10:55:51.933: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136904269s
Dec 20 10:55:54.151: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35466056s
Dec 20 10:55:56.168: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371257502s
Dec 20 10:55:58.201: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.404295353s
STEP: Saw pod success
Dec 20 10:55:58.201: INFO: Pod "downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:55:58.213: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 10:55:58.455: INFO: Waiting for pod downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004 to disappear
Dec 20 10:55:58.547: INFO: Pod downwardapi-volume-476d3895-2317-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:55:58.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5mlgg" for this suite.
Dec 20 10:56:06.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:56:06.771: INFO: namespace: e2e-tests-downward-api-5mlgg, resource: bindings, ignored listing per whitelist
Dec 20 10:56:06.834: INFO: namespace e2e-tests-downward-api-5mlgg deletion completed in 8.269975633s

• [SLOW TEST:21.456 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
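This test relies on a Downward API detail worth spelling out: when a container declares no memory limit, a `resourceFieldRef` on `limits.memory` resolves to the node's allocatable memory. A minimal sketch (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container          # no resources.limits set on purpose
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # defaults to node allocatable memory here
```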
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:56:06.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 10:56:07.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-4ctsq" to be "success or failure"
Dec 20 10:56:07.279: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 111.022664ms
Dec 20 10:56:09.598: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43067546s
Dec 20 10:56:11.615: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447788716s
Dec 20 10:56:13.896: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.728473812s
Dec 20 10:56:16.490: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.322863389s
Dec 20 10:56:18.528: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.360019832s
STEP: Saw pod success
Dec 20 10:56:18.528: INFO: Pod "downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 10:56:18.544: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 10:56:18.716: INFO: Waiting for pod downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004 to disappear
Dec 20 10:56:18.728: INFO: Pod downwardapi-volume-52f8e7aa-2317-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:56:18.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ctsq" for this suite.
Dec 20 10:56:24.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:56:24.889: INFO: namespace: e2e-tests-projected-4ctsq, resource: bindings, ignored listing per whitelist
Dec 20 10:56:24.907: INFO: namespace e2e-tests-projected-4ctsq deletion completed in 6.172647198s

• [SLOW TEST:18.073 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
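The `should set DefaultMode on files` test checks file permissions on a projected downwardAPI volume. The relevant knob is `defaultMode` on the projected volume; a hedged sketch of the shape involved (mode value and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # applied to files lacking an explicit per-item mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```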
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:56:24.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 10:56:25.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hklg2" for this suite.
Dec 20 10:56:31.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 10:56:31.670: INFO: namespace: e2e-tests-kubelet-test-hklg2, resource: bindings, ignored listing per whitelist
Dec 20 10:56:31.670: INFO: namespace e2e-tests-kubelet-test-hklg2 deletion completed in 6.272843393s

• [SLOW TEST:6.762 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
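The Kubelet test above verifies that a pod whose container command always fails (and so crash-loops under `restartPolicy: Always`) can still be deleted cleanly. A pod of that shape looks roughly like this (name is illustrative; the e2e suite generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo      # illustrative name
spec:
  restartPolicy: Always     # container exits non-zero, so it restarts repeatedly
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
```

Deleting it with `kubectl delete pod bin-false-demo` should succeed despite the crash loop, which is the behavior the conformance test asserts.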
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 10:56:31.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nbwxm
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nbwxm
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nbwxm
Dec 20 10:56:32.125: INFO: Found 0 stateful pods, waiting for 1
Dec 20 10:56:42.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 20 10:56:42.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 10:56:43.037: INFO: stderr: ""
Dec 20 10:56:43.037: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 10:56:43.037: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 10:56:43.056: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 20 10:56:53.070: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 10:56:53.070: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 10:56:53.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999498s
Dec 20 10:56:54.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.931889264s
Dec 20 10:56:55.219: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.896443828s
Dec 20 10:56:56.243: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.877463693s
Dec 20 10:56:57.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.85360033s
Dec 20 10:56:58.296: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.816506105s
Dec 20 10:56:59.316: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.801155179s
Dec 20 10:57:00.361: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.780482049s
Dec 20 10:57:01.379: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.735203388s
Dec 20 10:57:02.414: INFO: Verifying statefulset ss doesn't scale past 1 for another 717.774963ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nbwxm
Dec 20 10:57:03.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:57:04.692: INFO: stderr: ""
Dec 20 10:57:04.692: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 10:57:04.692: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 10:57:04.709: INFO: Found 1 stateful pods, waiting for 3
Dec 20 10:57:14.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:14.733: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:14.733: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 20 10:57:24.729: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:24.729: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:24.729: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 20 10:57:34.737: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:34.738: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 10:57:34.738: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Confirming that stateful set scale down will halt with unhealthy stateful pod
Dec 20 10:57:34.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 10:57:35.290: INFO: stderr: ""
Dec 20 10:57:35.291: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 10:57:35.291: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 10:57:35.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 10:57:35.945: INFO: stderr: ""
Dec 20 10:57:35.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 10:57:35.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 10:57:35.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 10:57:36.853: INFO: stderr: ""
Dec 20 10:57:36.853: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 10:57:36.853: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 10:57:36.853: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 10:57:36.872: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 20 10:57:46.951: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 10:57:46.951: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 10:57:46.951: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 10:57:47.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999058s
Dec 20 10:57:48.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986104619s
Dec 20 10:57:49.080: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.951590983s
Dec 20 10:57:50.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.927473105s
Dec 20 10:57:51.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.896059182s
Dec 20 10:57:52.170: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.873892048s
Dec 20 10:57:53.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.837015851s
Dec 20 10:57:54.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.171888245s
Dec 20 10:57:55.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.155635324s
Dec 20 10:57:56.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 86.243462ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-nbwxm
Dec 20 10:57:57.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:57:58.631: INFO: stderr: ""
Dec 20 10:57:58.631: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 10:57:58.631: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 10:57:58.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:57:59.236: INFO: stderr: ""
Dec 20 10:57:59.236: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 10:57:59.236: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 10:57:59.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:57:59.677: INFO: rc: 126
Dec 20 10:57:59.677: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/14368/ns/ipc\" caused \"lstat /proc/14368/ns/ipc: no such file or directory\"": unknown
 command terminated with exit code 126
 []  0xc0011a3590 exit status 126   true [0xc001d7a590 0xc001d7a5a8 0xc001d7a5c0] [0xc001d7a590 0xc001d7a5a8 0xc001d7a5c0] [0xc001d7a5a0 0xc001d7a5b8] [0x935700 0x935700] 0xc001c0fd40 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/14368/ns/ipc\" caused \"lstat /proc/14368/ns/ipc: no such file or directory\"": unknown

stderr:
command terminated with exit code 126

error:
exit status 126

Dec 20 10:58:09.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:58:09.944: INFO: rc: 1
Dec 20 10:58:09.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0011a3740 exit status 1   true [0xc001d7a5c8 0xc001d7a5e0 0xc001d7a5f8] [0xc001d7a5c8 0xc001d7a5e0 0xc001d7a5f8] [0xc001d7a5d8 0xc001d7a5f0] [0x935700 0x935700] 0xc00170d200 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 20 10:58:19.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:58:20.136: INFO: rc: 1
Dec 20 10:58:20.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0011a3860 exit status 1   true [0xc001d7a600 0xc001d7a618 0xc001d7a630] [0xc001d7a600 0xc001d7a618 0xc001d7a630] [0xc001d7a610 0xc001d7a628] [0x935700 0x935700] 0xc001837f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:58:30.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:58:30.291: INFO: rc: 1
Dec 20 10:58:30.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001be86c0 exit status 1   true [0xc001da07b0 0xc001da07c8 0xc001da07e0] [0xc001da07b0 0xc001da07c8 0xc001da07e0] [0xc001da07c0 0xc001da07d8] [0x935700 0x935700] 0xc0015ecc00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:58:40.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:58:40.471: INFO: rc: 1
Dec 20 10:58:40.471: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c0120 exit status 1   true [0xc000414d48 0xc000414f28 0xc0004151a8] [0xc000414d48 0xc000414f28 0xc0004151a8] [0xc000414eb8 0xc000415190] [0x935700 0x935700] 0xc00170cd20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:58:50.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:58:50.626: INFO: rc: 1
Dec 20 10:58:50.626: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c0270 exit status 1   true [0xc0004151c8 0xc0004152b0 0xc0004153e0] [0xc0004151c8 0xc0004152b0 0xc0004153e0] [0xc000415240 0xc000415338] [0x935700 0x935700] 0xc000cede60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:00.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:00.794: INFO: rc: 1
Dec 20 10:59:00.794: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0003122a0 exit status 1   true [0xc00000e2a8 0xc000224198 0xc000224208] [0xc00000e2a8 0xc000224198 0xc000224208] [0xc000224190 0xc0002241c0] [0x935700 0x935700] 0xc0014b0840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:10.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:10.940: INFO: rc: 1
Dec 20 10:59:10.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000312db0 exit status 1   true [0xc000224230 0xc000224318 0xc0002243c0] [0xc000224230 0xc000224318 0xc0002243c0] [0xc000224310 0xc0002243a8] [0x935700 0x935700] 0xc001c0f3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:20.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:21.065: INFO: rc: 1
Dec 20 10:59:21.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e4120 exit status 1   true [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a010 0xc001d7a028] [0x935700 0x935700] 0xc0017a4a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:31.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:31.249: INFO: rc: 1
Dec 20 10:59:31.250: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000313080 exit status 1   true [0xc0002243c8 0xc000224438 0xc000224470] [0xc0002243c8 0xc000224438 0xc000224470] [0xc000224428 0xc000224458] [0x935700 0x935700] 0xc001c0fce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:41.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:41.452: INFO: rc: 1
Dec 20 10:59:41.452: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0004e76e0 exit status 1   true [0xc001dba000 0xc001dba018 0xc001dba030] [0xc001dba000 0xc001dba018 0xc001dba030] [0xc001dba010 0xc001dba028] [0x935700 0x935700] 0xc001586a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 10:59:51.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 10:59:51.631: INFO: rc: 1
Dec 20 10:59:51.632: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e4270 exit status 1   true [0xc001d7a038 0xc001d7a050 0xc001d7a068] [0xc001d7a038 0xc001d7a050 0xc001d7a068] [0xc001d7a048 0xc001d7a060] [0x935700 0x935700] 0xc0017a5aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:01.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:01.822: INFO: rc: 1
Dec 20 11:00:01.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000313200 exit status 1   true [0xc0002244b8 0xc000224548 0xc000224578] [0xc0002244b8 0xc000224548 0xc000224578] [0xc0002244f8 0xc000224568] [0x935700 0x935700] 0xc001652000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:11.978: INFO: rc: 1
Dec 20 11:00:11.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0004e7830 exit status 1   true [0xc001dba038 0xc001dba050 0xc001dba068] [0xc001dba038 0xc001dba050 0xc001dba068] [0xc001dba048 0xc001dba060] [0x935700 0x935700] 0xc001586d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:21.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:22.166: INFO: rc: 1
Dec 20 11:00:22.166: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000313350 exit status 1   true [0xc000224588 0xc0002245a0 0xc0002245e8] [0xc000224588 0xc0002245a0 0xc0002245e8] [0xc000224598 0xc0002245c0] [0x935700 0x935700] 0xc001652360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:32.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:32.348: INFO: rc: 1
Dec 20 11:00:32.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e43c0 exit status 1   true [0xc001d7a070 0xc001d7a088 0xc001d7a0a0] [0xc001d7a070 0xc001d7a088 0xc001d7a0a0] [0xc001d7a080 0xc001d7a098] [0x935700 0x935700] 0xc0017a5da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:42.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:42.554: INFO: rc: 1
Dec 20 11:00:42.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c0000 exit status 1   true [0xc000414d48 0xc000414f28 0xc0004151a8] [0xc000414d48 0xc000414f28 0xc0004151a8] [0xc000414eb8 0xc000415190] [0x935700 0x935700] 0xc001c0fbc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:00:52.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:00:52.707: INFO: rc: 1
Dec 20 11:00:52.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000312f00 exit status 1   true [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a010 0xc001d7a028] [0x935700 0x935700] 0xc0014b1f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:02.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:02.870: INFO: rc: 1
Dec 20 11:01:02.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c01b0 exit status 1   true [0xc0004151c8 0xc0004152b0 0xc0004153e0] [0xc0004151c8 0xc0004152b0 0xc0004153e0] [0xc000415240 0xc000415338] [0x935700 0x935700] 0xc001c0fec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:12.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:13.028: INFO: rc: 1
Dec 20 11:01:13.028: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e4150 exit status 1   true [0xc000224150 0xc0002241b8 0xc000224230] [0xc000224150 0xc0002241b8 0xc000224230] [0xc000224198 0xc000224208] [0x935700 0x935700] 0xc000cec720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:23.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:23.207: INFO: rc: 1
Dec 20 11:01:23.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e42d0 exit status 1   true [0xc0002242a0 0xc000224398 0xc0002243c8] [0xc0002242a0 0xc000224398 0xc0002243c8] [0xc000224318 0xc0002243c0] [0x935700 0x935700] 0xc00170dd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:33.389: INFO: rc: 1
Dec 20 11:01:33.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0003130e0 exit status 1   true [0xc001d7a038 0xc001d7a050 0xc001d7a068] [0xc001d7a038 0xc001d7a050 0xc001d7a068] [0xc001d7a048 0xc001d7a060] [0x935700 0x935700] 0xc0016522a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:43.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:43.546: INFO: rc: 1
Dec 20 11:01:43.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c0360 exit status 1   true [0xc0004153f0 0xc000415500 0xc0004155a8] [0xc0004153f0 0xc000415500 0xc0004155a8] [0xc000415498 0xc0004155a0] [0x935700 0x935700] 0xc0017a48a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:01:53.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:01:53.735: INFO: rc: 1
Dec 20 11:01:53.735: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000313230 exit status 1   true [0xc001d7a070 0xc001d7a088 0xc001d7a0a0] [0xc001d7a070 0xc001d7a088 0xc001d7a0a0] [0xc001d7a080 0xc001d7a098] [0x935700 0x935700] 0xc001652960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:03.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:03.899: INFO: rc: 1
Dec 20 11:02:03.899: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023c0480 exit status 1   true [0xc000415628 0xc0004156d8 0xc0004157c0] [0xc000415628 0xc0004156d8 0xc0004157c0] [0xc000415670 0xc000415770] [0x935700 0x935700] 0xc0017a5a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:13.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:14.048: INFO: rc: 1
Dec 20 11:02:14.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e4450 exit status 1   true [0xc000224410 0xc000224440 0xc0002244b8] [0xc000224410 0xc000224440 0xc0002244b8] [0xc000224438 0xc000224470] [0x935700 0x935700] 0xc001586060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:24.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:24.209: INFO: rc: 1
Dec 20 11:02:24.210: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0003133b0 exit status 1   true [0xc001d7a0a8 0xc001d7a0c0 0xc001d7a0d8] [0xc001d7a0a8 0xc001d7a0c0 0xc001d7a0d8] [0xc001d7a0b8 0xc001d7a0d0] [0x935700 0x935700] 0xc001653560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:34.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:34.345: INFO: rc: 1
Dec 20 11:02:34.345: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0004e7710 exit status 1   true [0xc001dba000 0xc001dba018 0xc001dba030] [0xc001dba000 0xc001dba018 0xc001dba030] [0xc001dba010 0xc001dba028] [0x935700 0x935700] 0xc0013ee6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:44.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:44.519: INFO: rc: 1
Dec 20 11:02:44.519: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0003122a0 exit status 1   true [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a000 0xc001d7a018 0xc001d7a030] [0xc001d7a010 0xc001d7a028] [0x935700 0x935700] 0xc00170cd20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:02:54.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:02:54.693: INFO: rc: 1
Dec 20 11:02:54.693: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0001e4120 exit status 1   true [0xc000224150 0xc0002241b8 0xc000224230] [0xc000224150 0xc0002241b8 0xc000224230] [0xc000224198 0xc000224208] [0x935700 0x935700] 0xc00144a000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 20 11:03:04.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nbwxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:03:04.863: INFO: rc: 1
Dec 20 11:03:04.864: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Dec 20 11:03:04.864: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 20 11:03:04.915: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nbwxm
Dec 20 11:03:04.923: INFO: Scaling statefulset ss to 0
Dec 20 11:03:04.940: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 11:03:04.944: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:03:04.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nbwxm" for this suite.
Dec 20 11:03:13.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:03:13.231: INFO: namespace: e2e-tests-statefulset-nbwxm, resource: bindings, ignored listing per whitelist
Dec 20 11:03:13.338: INFO: namespace e2e-tests-statefulset-nbwxm deletion completed in 8.342445631s

• [SLOW TEST:401.668 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
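The repeated "Waiting 10s to retry failed RunHostCmd" lines above show the e2e framework's retry-until-success pattern around `kubectl exec`. A minimal Python sketch of that loop, under the assumption that `runner` is an illustrative callable returning `(rc, stdout, stderr)` (the real framework shells out to kubectl and waits 10s between attempts):

```python
import time

def run_host_cmd_with_retries(cmd, runner, retries=20, interval=0.0):
    """Retry a host command until it succeeds, mirroring the e2e
    framework's 'Waiting 10s to retry failed RunHostCmd' loop.

    runner(cmd) is a hypothetical callable returning (rc, stdout, stderr);
    rc == 0 counts as success. The real suite uses a 10-second interval;
    the default here is 0 only to keep the sketch fast to run.
    """
    last = None
    for _ in range(retries):
        rc, stdout, stderr = runner(cmd)
        if rc == 0:
            return stdout
        last = (rc, stdout, stderr)
        time.sleep(interval)
    raise RuntimeError(f"command {cmd!r} failed after {retries} attempts: {last}")
```

Each failed attempt in the log (`rc: 1`, `pods "ss-2" not found`) corresponds to one iteration of this loop; the test only gives up after the full retry budget is spent.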
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:03:13.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pg448;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pg448;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pg448.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 28.6.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.6.28_udp@PTR;check="$$(dig +tcp +noall +answer +search 28.6.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.6.28_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pg448;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pg448;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pg448.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pg448.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pg448.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 28.6.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.6.28_udp@PTR;check="$$(dig +tcp +noall +answer +search 28.6.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.6.28_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 11:03:28.024: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.029: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.036: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-pg448 from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.044: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-pg448 from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.188: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.202: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.211: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.218: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.225: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.232: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.240: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.247: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.252: INFO: Unable to read 10.98.6.28_udp@PTR from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.257: INFO: Unable to read 10.98.6.28_tcp@PTR from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.263: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.268: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.274: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pg448 from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.282: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pg448 from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.289: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.294: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.299: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.304: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.309: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.314: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.318: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.323: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.327: INFO: Unable to read 10.98.6.28_udp@PTR from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.331: INFO: Unable to read 10.98.6.28_tcp@PTR from pod e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-5131f0b2-2318-11ea-851f-0242ac110004)
Dec 20 11:03:28.331: INFO: Lookups using e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-pg448 wheezy_tcp@dns-test-service.e2e-tests-dns-pg448 wheezy_udp@dns-test-service.e2e-tests-dns-pg448.svc wheezy_tcp@dns-test-service.e2e-tests-dns-pg448.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.98.6.28_udp@PTR 10.98.6.28_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pg448 jessie_tcp@dns-test-service.e2e-tests-dns-pg448 jessie_udp@dns-test-service.e2e-tests-dns-pg448.svc jessie_tcp@dns-test-service.e2e-tests-dns-pg448.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pg448.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pg448.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.98.6.28_udp@PTR 10.98.6.28_tcp@PTR]

Dec 20 11:03:33.841: INFO: DNS probes using e2e-tests-dns-pg448/dns-test-5131f0b2-2318-11ea-851f-0242ac110004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:03:34.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-pg448" for this suite.
Dec 20 11:03:42.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:03:42.677: INFO: namespace: e2e-tests-dns-pg448, resource: bindings, ignored listing per whitelist
Dec 20 11:03:42.946: INFO: namespace e2e-tests-dns-pg448 deletion completed in 8.455332357s

• [SLOW TEST:29.607 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
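The DNS test above has each prober pod write an `OK` marker file per successful lookup, then reads those files back and reports the ones still missing ("Lookups using ... failed for: [...]"). A small sketch of that bookkeeping step, with the result mapping as an illustrative stand-in for the per-file reads:

```python
def failed_lookups(results):
    """Return the names of DNS probe results that did not read 'OK',
    in probe order — the aggregation behind the 'Lookups ... failed
    for: [...]' log line.

    results maps a probe name such as 'wheezy_udp@dns-test-service'
    to the marker-file contents read from the prober pod, or None
    when the read itself failed (e.g. the pod is not up yet).
    """
    return [name for name, content in results.items() if content != "OK"]
```

In the log, every probe fails at 11:03:28 (the prober pod is not readable yet) and the list is empty by 11:03:33, so the poll loop declares success.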
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:03:42.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004
Dec 20 11:03:43.284: INFO: Pod name my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004: Found 0 pods out of 1
Dec 20 11:03:48.308: INFO: Pod name my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004: Found 1 pods out of 1
Dec 20 11:03:48.308: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004" are running
Dec 20 11:03:54.336: INFO: Pod "my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004-th78p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 11:03:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 11:03:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 11:03:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 11:03:43 +0000 UTC Reason: Message:}])
Dec 20 11:03:54.336: INFO: Trying to dial the pod
Dec 20 11:03:59.514: INFO: Controller my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004: Got expected result from replica 1 [my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004-th78p]: "my-hostname-basic-62c9836b-2318-11ea-851f-0242ac110004-th78p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:03:59.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-47mmc" for this suite.
Dec 20 11:04:07.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:04:08.167: INFO: namespace: e2e-tests-replication-controller-47mmc, resource: bindings, ignored listing per whitelist
Dec 20 11:04:08.589: INFO: namespace e2e-tests-replication-controller-47mmc deletion completed in 9.056021254s

• [SLOW TEST:25.643 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
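The ReplicationController test above dials each replica and expects the response body to be the pod's own name ("Got expected result from replica 1 ... 1 of 1 required successes"). A sketch of that verification step; the function name and the `responses` mapping are illustrative, not the framework's API:

```python
def verify_replica_responses(responses, required=1):
    """Check that each dialed replica echoed its own pod name, as the
    basic-image ReplicationController test does. responses maps a pod
    name to the body that pod returned. Returns the success count, or
    raises if any replica answers with the wrong name or too few
    replicas responded.
    """
    successes = 0
    for pod_name, body in responses.items():
        if body != pod_name:
            raise AssertionError(f"replica {pod_name} returned {body!r}")
        successes += 1
    if successes < required:
        raise AssertionError(f"only {successes}/{required} required successes")
    return successes
```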
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:04:08.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:04:09.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-9kn7q" to be "success or failure"
Dec 20 11:04:09.657: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 86.867599ms
Dec 20 11:04:11.670: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099845284s
Dec 20 11:04:13.744: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173942248s
Dec 20 11:04:15.845: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274883124s
Dec 20 11:04:17.885: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314999929s
Dec 20 11:04:19.900: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329253746s
Dec 20 11:04:21.917: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.34668613s
STEP: Saw pod success
Dec 20 11:04:21.917: INFO: Pod "downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:04:21.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:04:22.021: INFO: Waiting for pod downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004 to disappear
Dec 20 11:04:22.030: INFO: Pod downwardapi-volume-7272aa01-2318-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:04:22.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9kn7q" for this suite.
Dec 20 11:04:28.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:04:28.639: INFO: namespace: e2e-tests-projected-9kn7q, resource: bindings, ignored listing per whitelist
Dec 20 11:04:28.757: INFO: namespace e2e-tests-projected-9kn7q deletion completed in 6.286119006s

• [SLOW TEST:20.168 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
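The projected downward API test above asserts that when a container sets no memory limit, the downward API reports the node's allocatable memory in its place. The resolution rule it checks can be sketched in a few lines (values in bytes; this is an illustration of the behavior under test, not kubelet code):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward API fallback sketch: a container's exposed memory
    limit is its own limit when set, otherwise the node's allocatable
    memory — the default the test verifies via the projected volume.
    """
    return container_limit if container_limit is not None else node_allocatable
```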
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:04:28.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 20 11:04:29.024: INFO: Waiting up to 5m0s for pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004" in namespace "e2e-tests-var-expansion-2p6h5" to be "success or failure"
Dec 20 11:04:29.156: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 132.015457ms
Dec 20 11:04:31.208: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183914488s
Dec 20 11:04:33.238: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21328369s
Dec 20 11:04:35.443: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418910894s
Dec 20 11:04:37.841: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816249375s
Dec 20 11:04:39.884: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.859169036s
STEP: Saw pod success
Dec 20 11:04:39.884: INFO: Pod "var-expansion-7e19c38d-2318-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:04:39.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7e19c38d-2318-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 11:04:40.786: INFO: Waiting for pod var-expansion-7e19c38d-2318-11ea-851f-0242ac110004 to disappear
Dec 20 11:04:40.836: INFO: Pod var-expansion-7e19c38d-2318-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:04:40.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2p6h5" for this suite.
Dec 20 11:04:46.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:04:47.007: INFO: namespace: e2e-tests-var-expansion-2p6h5, resource: bindings, ignored listing per whitelist
Dec 20 11:04:47.070: INFO: namespace e2e-tests-var-expansion-2p6h5 deletion completed in 6.220006811s

• [SLOW TEST:18.312 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
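The Variable Expansion test above exercises the kubelet's `$(VAR)` substitution in container args. A simplified Python approximation of that expansion: known variables are replaced from the container's environment and references to unknown variables are left verbatim, matching documented Kubernetes behavior; the `$$` escape is deliberately omitted for brevity.

```python
import re

# $(NAME) references, where NAME is a plausible environment variable name.
_VAR = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand_command_args(args, env):
    """Approximate Kubernetes $(VAR) expansion in container
    command/args: substitute known variables from env, leave
    unknown references untouched (the '$$' escape is not handled
    in this sketch).
    """
    def sub(match):
        return env.get(match.group(1), match.group(0))
    return [_VAR.sub(sub, arg) for arg in args]
```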
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:04:47.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-97nls
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 11:04:47.240: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 11:05:21.495: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-97nls PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:05:21.495: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:05:21.914: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:05:21.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-97nls" for this suite.
Dec 20 11:05:47.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:05:48.027: INFO: namespace: e2e-tests-pod-network-test-97nls, resource: bindings, ignored listing per whitelist
Dec 20 11:05:48.135: INFO: namespace e2e-tests-pod-network-test-97nls deletion completed in 26.199374765s

• [SLOW TEST:61.065 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:05:48.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 20 11:05:59.188: INFO: Successfully updated pod "labelsupdatead66095f-2318-11ea-851f-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:06:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8m26c" for this suite.
Dec 20 11:06:25.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:06:25.492: INFO: namespace: e2e-tests-projected-8m26c, resource: bindings, ignored listing per whitelist
Dec 20 11:06:25.541: INFO: namespace e2e-tests-projected-8m26c deletion completed in 24.22326815s

• [SLOW TEST:37.405 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:06:25.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 20 11:06:26.352: INFO: created pod pod-service-account-defaultsa
Dec 20 11:06:26.353: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 20 11:06:26.377: INFO: created pod pod-service-account-mountsa
Dec 20 11:06:26.377: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 20 11:06:26.425: INFO: created pod pod-service-account-nomountsa
Dec 20 11:06:26.425: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 20 11:06:26.444: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 20 11:06:26.445: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 20 11:06:26.604: INFO: created pod pod-service-account-mountsa-mountspec
Dec 20 11:06:26.604: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 20 11:06:26.674: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 20 11:06:26.675: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 20 11:06:26.844: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 20 11:06:26.844: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 20 11:06:26.896: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 20 11:06:26.896: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 20 11:06:27.734: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 20 11:06:27.734: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:06:27.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rnbkh" for this suite.
Dec 20 11:06:58.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:06:58.188: INFO: namespace: e2e-tests-svcaccounts-rnbkh, resource: bindings, ignored listing per whitelist
Dec 20 11:06:58.349: INFO: namespace e2e-tests-svcaccounts-rnbkh deletion completed in 29.345711482s

• [SLOW TEST:32.808 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:06:58.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 11:06:58.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 20 11:06:58.683: INFO: stderr: ""
Dec 20 11:06:58.683: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 20 11:06:58.689: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:06:58.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-49z6l" for this suite.
Dec 20 11:07:04.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:07:04.877: INFO: namespace: e2e-tests-kubectl-49z6l, resource: bindings, ignored listing per whitelist
Dec 20 11:07:04.901: INFO: namespace e2e-tests-kubectl-49z6l deletion completed in 6.199371204s

S [SKIPPING] [6.552 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 20 11:06:58.689: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:07:04.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 20 11:07:05.264: INFO: Waiting up to 5m0s for pod "pod-db36f4de-2318-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-z79gk" to be "success or failure"
Dec 20 11:07:05.286: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.01009ms
Dec 20 11:07:07.469: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204246322s
Dec 20 11:07:09.482: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217718998s
Dec 20 11:07:11.771: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506788428s
Dec 20 11:07:13.784: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519139284s
Dec 20 11:07:15.794: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.529487426s
STEP: Saw pod success
Dec 20 11:07:15.794: INFO: Pod "pod-db36f4de-2318-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:07:15.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db36f4de-2318-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:07:16.985: INFO: Waiting for pod pod-db36f4de-2318-11ea-851f-0242ac110004 to disappear
Dec 20 11:07:16.992: INFO: Pod pod-db36f4de-2318-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:07:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z79gk" for this suite.
Dec 20 11:07:23.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:07:23.128: INFO: namespace: e2e-tests-emptydir-z79gk, resource: bindings, ignored listing per whitelist
Dec 20 11:07:23.181: INFO: namespace e2e-tests-emptydir-z79gk deletion completed in 6.178426389s

• [SLOW TEST:18.280 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:07:23.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gz8l9
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-gz8l9
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-gz8l9
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-gz8l9
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-gz8l9
Dec 20 11:07:38.031: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gz8l9, name: ss-0, uid: ed7ecf40-2318-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 20 11:07:42.473: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gz8l9, name: ss-0, uid: ed7ecf40-2318-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 20 11:07:42.554: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-gz8l9, name: ss-0, uid: ed7ecf40-2318-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 20 11:07:42.583: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-gz8l9
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-gz8l9
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-gz8l9 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 20 11:07:54.022: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gz8l9
Dec 20 11:07:54.030: INFO: Scaling statefulset ss to 0
Dec 20 11:08:04.171: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 11:08:04.176: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:08:04.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gz8l9" for this suite.
Dec 20 11:08:12.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:08:12.464: INFO: namespace: e2e-tests-statefulset-gz8l9, resource: bindings, ignored listing per whitelist
Dec 20 11:08:12.761: INFO: namespace e2e-tests-statefulset-gz8l9 deletion completed in 8.477898664s

• [SLOW TEST:49.580 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:08:12.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 20 11:08:13.039: INFO: Waiting up to 5m0s for pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-4m8gv" to be "success or failure"
Dec 20 11:08:13.079: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.552344ms
Dec 20 11:08:15.462: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422752974s
Dec 20 11:08:17.484: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444645091s
Dec 20 11:08:19.568: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528502469s
Dec 20 11:08:21.915: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.876030357s
Dec 20 11:08:24.074: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.03469112s
STEP: Saw pod success
Dec 20 11:08:24.074: INFO: Pod "downward-api-03a05dd2-2319-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:08:24.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-03a05dd2-2319-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 11:08:24.720: INFO: Waiting for pod downward-api-03a05dd2-2319-11ea-851f-0242ac110004 to disappear
Dec 20 11:08:24.738: INFO: Pod downward-api-03a05dd2-2319-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:08:24.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4m8gv" for this suite.
Dec 20 11:08:32.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:08:32.961: INFO: namespace: e2e-tests-downward-api-4m8gv, resource: bindings, ignored listing per whitelist
Dec 20 11:08:32.966: INFO: namespace e2e-tests-downward-api-4m8gv deletion completed in 8.220829758s

• [SLOW TEST:20.203 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:08:32.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:08:33.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-nxsqh" to be "success or failure"
Dec 20 11:08:33.179: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514636ms
Dec 20 11:08:36.679: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504419256s
Dec 20 11:08:38.709: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.534735219s
Dec 20 11:08:40.776: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.601818346s
Dec 20 11:08:42.807: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.632025943s
Dec 20 11:08:44.839: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.6639877s
STEP: Saw pod success
Dec 20 11:08:44.839: INFO: Pod "downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:08:44.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:08:45.639: INFO: Waiting for pod downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004 to disappear
Dec 20 11:08:45.685: INFO: Pod downwardapi-volume-0fa0246a-2319-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:08:45.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nxsqh" for this suite.
Dec 20 11:08:52.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:08:52.368: INFO: namespace: e2e-tests-projected-nxsqh, resource: bindings, ignored listing per whitelist
Dec 20 11:08:52.444: INFO: namespace e2e-tests-projected-nxsqh deletion completed in 6.74122797s

• [SLOW TEST:19.478 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:08:52.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 20 11:08:52.838: INFO: Waiting up to 5m0s for pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004" in namespace "e2e-tests-containers-srgf4" to be "success or failure"
Dec 20 11:08:52.852: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.307219ms
Dec 20 11:08:54.876: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03808267s
Dec 20 11:08:56.918: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079914124s
Dec 20 11:08:58.943: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105334521s
Dec 20 11:09:00.960: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121643327s
STEP: Saw pod success
Dec 20 11:09:00.960: INFO: Pod "client-containers-1b5212dd-2319-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:09:00.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1b5212dd-2319-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:09:01.182: INFO: Waiting for pod client-containers-1b5212dd-2319-11ea-851f-0242ac110004 to disappear
Dec 20 11:09:01.205: INFO: Pod client-containers-1b5212dd-2319-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:09:01.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-srgf4" for this suite.
Dec 20 11:09:07.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:09:07.604: INFO: namespace: e2e-tests-containers-srgf4, resource: bindings, ignored listing per whitelist
Dec 20 11:09:07.725: INFO: namespace e2e-tests-containers-srgf4 deletion completed in 6.502957474s

• [SLOW TEST:15.281 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:09:07.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 20 11:09:08.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-474ww run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 20 11:09:19.244: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 20 11:09:19.244: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:09:21.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-474ww" for this suite.
Dec 20 11:09:28.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:09:28.349: INFO: namespace: e2e-tests-kubectl-474ww, resource: bindings, ignored listing per whitelist
Dec 20 11:09:28.361: INFO: namespace e2e-tests-kubectl-474ww deletion completed in 6.491250627s

• [SLOW TEST:20.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:09:28.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:10:28.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-drkrv" for this suite.
Dec 20 11:10:52.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:10:52.713: INFO: namespace: e2e-tests-container-probe-drkrv, resource: bindings, ignored listing per whitelist
Dec 20 11:10:52.819: INFO: namespace e2e-tests-container-probe-drkrv deletion completed in 24.176992695s

• [SLOW TEST:84.456 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:10:52.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:11:03.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mr65d" for this suite.
Dec 20 11:11:59.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:11:59.321: INFO: namespace: e2e-tests-kubelet-test-mr65d, resource: bindings, ignored listing per whitelist
Dec 20 11:11:59.387: INFO: namespace e2e-tests-kubelet-test-mr65d deletion completed in 56.282366387s

• [SLOW TEST:66.567 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
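The Kubelet read-only test above verifies that a container with a read-only root filesystem cannot write to it. A minimal sketch of such a pod, assuming a busybox image (illustrative, not from the log):

```yaml
# Illustrative only: busybox container with a read-only root filesystem.
# The write to /file is expected to fail; only explicitly mounted
# writable volumes would accept writes.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs       # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```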
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:11:59.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 20 11:11:59.649: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix564923879/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:11:59.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jrxm8" for this suite.
Dec 20 11:12:05.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:12:05.922: INFO: namespace: e2e-tests-kubectl-jrxm8, resource: bindings, ignored listing per whitelist
Dec 20 11:12:05.941: INFO: namespace e2e-tests-kubectl-jrxm8 deletion completed in 6.192643249s

• [SLOW TEST:6.553 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:12:05.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1220 11:12:36.927781       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 11:12:36.928: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:12:36.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wndsk" for this suite.
Dec 20 11:12:50.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:12:51.059: INFO: namespace: e2e-tests-gc-wndsk, resource: bindings, ignored listing per whitelist
Dec 20 11:12:51.155: INFO: namespace e2e-tests-gc-wndsk deletion completed in 14.220600178s

• [SLOW TEST:45.214 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
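The garbage-collector test above deletes a Deployment with PropagationPolicy set to Orphan, then waits 30 seconds to confirm the ReplicaSet is left behind. An equivalent API delete body, as a sketch (the namespace and name placeholders are illustrative):

```yaml
# Illustrative only: DeleteOptions body for
# DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
# With propagationPolicy Orphan, the GC removes owner references
# from dependents (the ReplicaSet) instead of deleting them.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```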
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:12:51.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jg9hv
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 20 11:12:52.728: INFO: Found 0 stateful pods, waiting for 3
Dec 20 11:13:03.001: INFO: Found 1 stateful pods, waiting for 3
Dec 20 11:13:12.767: INFO: Found 2 stateful pods, waiting for 3
Dec 20 11:13:22.753: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 11:13:22.753: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 11:13:22.753: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 20 11:13:32.742: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 11:13:32.742: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 11:13:32.742: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 11:13:32.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jg9hv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 11:13:33.287: INFO: stderr: ""
Dec 20 11:13:33.287: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 11:13:33.287: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 20 11:13:43.372: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 20 11:13:53.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jg9hv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:13:54.234: INFO: stderr: ""
Dec 20 11:13:54.234: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 11:13:54.234: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 11:14:04.305: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:14:04.305: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:04.305: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:14.328: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:14:14.328: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:14.328: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:24.340: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:14:24.340: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:34.363: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:14:34.363: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 11:14:44.685: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 20 11:14:54.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jg9hv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 11:14:55.010: INFO: stderr: ""
Dec 20 11:14:55.010: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 11:14:55.010: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 11:14:55.179: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 20 11:15:05.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jg9hv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 11:15:06.070: INFO: stderr: ""
Dec 20 11:15:06.070: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 11:15:06.070: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 11:15:06.287: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:15:06.287: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:06.287: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:06.287: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:16.314: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:15:16.314: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:16.314: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:26.684: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:15:26.685: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:26.685: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:36.333: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:15:36.333: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:46.322: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
Dec 20 11:15:46.322: INFO: Waiting for Pod e2e-tests-statefulset-jg9hv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 20 11:15:56.342: INFO: Waiting for StatefulSet e2e-tests-statefulset-jg9hv/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 20 11:16:06.501: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jg9hv
Dec 20 11:16:06.780: INFO: Scaling statefulset ss2 to 0
Dec 20 11:16:37.102: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 11:16:37.108: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:16:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jg9hv" for this suite.
Dec 20 11:16:45.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:16:45.354: INFO: namespace: e2e-tests-statefulset-jg9hv, resource: bindings, ignored listing per whitelist
Dec 20 11:16:45.432: INFO: namespace e2e-tests-statefulset-jg9hv deletion completed in 8.275091107s

• [SLOW TEST:234.277 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
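The StatefulSet test above creates a 3-replica set "ss2", updates the template image from nginx:1.14-alpine to 1.15-alpine, and rolls back. A minimal sketch of the kind of StatefulSet it exercises (field values are illustrative, reconstructed from the log):

```yaml
# Illustrative only: a StatefulSet like the "ss2" set exercised above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test               # headless service created in BeforeEach
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate           # pods updated in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine  # updated to 1.15-alpine, then rolled back
```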
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:16:45.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 20 11:19:50.079: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:19:50.103: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:19:52.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:19:52.117: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:19:54.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:19:54.116: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:19:56.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:19:57.431: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:19:58.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:19:58.118: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:00.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:00.133: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:02.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:02.134: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:04.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:04.138: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:06.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:06.122: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:08.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:08.122: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:10.103: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:10.448: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:12.105: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:12.124: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 20 11:20:14.104: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 20 11:20:14.116: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:20:14.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v7l42" for this suite.
Dec 20 11:20:38.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:20:38.219: INFO: namespace: e2e-tests-container-lifecycle-hook-v7l42, resource: bindings, ignored listing per whitelist
Dec 20 11:20:38.409: INFO: namespace e2e-tests-container-lifecycle-hook-v7l42 deletion completed in 24.286263553s

• [SLOW TEST:232.976 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:20:38.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-x5tn
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 11:20:38.774: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-x5tn" in namespace "e2e-tests-subpath-npxqf" to be "success or failure"
Dec 20 11:20:38.801: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 26.751634ms
Dec 20 11:20:40.839: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064529961s
Dec 20 11:20:42.910: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135547394s
Dec 20 11:20:44.927: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152098194s
Dec 20 11:20:46.943: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168390633s
Dec 20 11:20:48.957: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182327327s
Dec 20 11:20:50.973: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.198216523s
Dec 20 11:20:52.986: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.210993452s
Dec 20 11:20:54.995: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.220018075s
Dec 20 11:20:57.011: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 18.236530403s
Dec 20 11:20:59.049: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 20.274727896s
Dec 20 11:21:01.067: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 22.292177003s
Dec 20 11:21:03.081: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 24.306769013s
Dec 20 11:21:05.098: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 26.323352732s
Dec 20 11:21:07.113: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 28.338685409s
Dec 20 11:21:09.130: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 30.35523775s
Dec 20 11:21:11.147: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 32.372097357s
Dec 20 11:21:13.169: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Running", Reason="", readiness=false. Elapsed: 34.393907734s
Dec 20 11:21:15.190: INFO: Pod "pod-subpath-test-downwardapi-x5tn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.414856681s
STEP: Saw pod success
Dec 20 11:21:15.190: INFO: Pod "pod-subpath-test-downwardapi-x5tn" satisfied condition "success or failure"
Dec 20 11:21:15.198: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-x5tn container test-container-subpath-downwardapi-x5tn: 
STEP: delete the pod
Dec 20 11:21:15.945: INFO: Waiting for pod pod-subpath-test-downwardapi-x5tn to disappear
Dec 20 11:21:16.082: INFO: Pod pod-subpath-test-downwardapi-x5tn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-x5tn
Dec 20 11:21:16.082: INFO: Deleting pod "pod-subpath-test-downwardapi-x5tn" in namespace "e2e-tests-subpath-npxqf"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:21:16.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-npxqf" for this suite.
Dec 20 11:21:22.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:21:22.186: INFO: namespace: e2e-tests-subpath-npxqf, resource: bindings, ignored listing per whitelist
Dec 20 11:21:22.364: INFO: namespace e2e-tests-subpath-npxqf deletion completed in 6.268975208s

• [SLOW TEST:43.955 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
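The Subpath test above mounts an atomic-writer (downwardAPI) volume into a container via subPath and expects the pod to run to completion. A minimal sketch of that arrangement (paths, image, and item names are illustrative):

```yaml
# Illustrative only: a downwardAPI volume projected as a single file
# and mounted with subPath, as the atomic-writer subpath test does.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                # hypothetical image choice
    command: ["/bin/sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname
      subPath: podname            # mount just one projected file
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```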
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:21:22.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-xn2vp
Dec 20 11:21:32.887: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-xn2vp
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 11:21:32.898: INFO: Initial restart count of pod liveness-exec is 0
Dec 20 11:22:29.742: INFO: Restart count of pod e2e-tests-container-probe-xn2vp/liveness-exec is now 1 (56.843573049s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:22:29.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xn2vp" for this suite.
Dec 20 11:22:37.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:22:38.000: INFO: namespace: e2e-tests-container-probe-xn2vp, resource: bindings, ignored listing per whitelist
Dec 20 11:22:38.279: INFO: namespace e2e-tests-container-probe-xn2vp deletion completed in 8.457084001s

• [SLOW TEST:75.914 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
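The liveness-probe test above starts pod liveness-exec, then observes restartCount go from 0 to 1 once the probed file disappears. The classic pattern it uses, as a sketch (timings and image are illustrative assumptions):

```yaml
# Illustrative only: exec liveness probe on "cat /tmp/health".
# The container removes /tmp/health after a delay, the probe starts
# failing, and the kubelet restarts the container (restartCount 0 -> 1).
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```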
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:22:38.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 11:22:38.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:22:40.745: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 11:22:40.745: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 20 11:22:40.759: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 20 11:22:41.019: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 20 11:22:41.057: INFO: scanned /root for discovery docs: 
Dec 20 11:22:41.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:23:05.767: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 20 11:23:05.767: INFO: stdout: "Created e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053\nScaling up e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 20 11:23:05.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:23:05.938: INFO: stderr: ""
Dec 20 11:23:05.938: INFO: stdout: "e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053-9hk6z "
Dec 20 11:23:05.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053-9hk6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:23:06.092: INFO: stderr: ""
Dec 20 11:23:06.092: INFO: stdout: "true"
Dec 20 11:23:06.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053-9hk6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:23:06.371: INFO: stderr: ""
Dec 20 11:23:06.371: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 20 11:23:06.371: INFO: e2e-test-nginx-rc-cf2b24013dc92ed7b4122ca114c57053-9hk6z is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 20 11:23:06.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xsgrf'
Dec 20 11:23:06.558: INFO: stderr: ""
Dec 20 11:23:06.558: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:23:06.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xsgrf" for this suite.
Dec 20 11:23:34.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:23:34.758: INFO: namespace: e2e-tests-kubectl-xsgrf, resource: bindings, ignored listing per whitelist
Dec 20 11:23:34.832: INFO: namespace e2e-tests-kubectl-xsgrf deletion completed in 28.237166364s

• [SLOW TEST:56.553 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
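Editor's note: the verification above drives `kubectl get pods -o template` with Go templates that check (1) the named container has a `running` state and (2) its image matches the rolled-over target. A minimal Python sketch of the same two checks, assuming a plain dict standing in for the v1.Pod object (hypothetical shape, not the real client types):

```python
def container_running(pod, name):
    # Mirrors the first template: true iff a containerStatus with the given
    # name exists and its state map contains a "running" entry.
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

def container_image(pod, name):
    # Mirrors the second template: the image of the named spec container.
    for container in pod.get("spec", {}).get("containers", []):
        if container.get("name") == name:
            return container.get("image")
    return None

# Sample pod shaped like the one verified in the log above.
pod = {
    "spec": {"containers": [{"name": "e2e-test-nginx-rc",
                             "image": "docker.io/library/nginx:1.14-alpine"}]},
    "status": {"containerStatuses": [{"name": "e2e-test-nginx-rc",
                                      "state": {"running": {}}}]},
}
```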
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:23:34.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-293a88e8-231b-11ea-851f-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-293a8add-231b-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-293a88e8-231b-11ea-851f-0242ac110004
STEP: Updating configmap cm-test-opt-upd-293a8add-231b-11ea-851f-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-293a8b33-231b-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:23:53.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rw7b4" for this suite.
Dec 20 11:24:17.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:24:17.576: INFO: namespace: e2e-tests-configmap-rw7b4, resource: bindings, ignored listing per whitelist
Dec 20 11:24:17.658: INFO: namespace e2e-tests-configmap-rw7b4 deletion completed in 24.233154133s

• [SLOW TEST:42.826 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
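Editor's note: the test above relies on `optional: true` ConfigMap volume sources — deleting an optional source must not break the pod, while updates and late creations must eventually appear in the volume. A sketch of the optional-lookup semantics, assuming a plain dict stands in for the API store (hypothetical helper, not kubelet code):

```python
def resolve_optional_configmap(store, name, optional=True):
    # With optional=True, a missing ConfigMap yields an empty volume payload
    # instead of a pod-startup failure; with optional=False it is an error.
    if name in store:
        return store[name]
    if optional:
        return {}
    raise KeyError(f"configmap {name} not found")
```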
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:24:17.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:24:17.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-5l9gf" to be "success or failure"
Dec 20 11:24:17.882: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.560026ms
Dec 20 11:24:20.209: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365859062s
Dec 20 11:24:22.230: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386915896s
Dec 20 11:24:24.247: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404382166s
Dec 20 11:24:26.268: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424884301s
Dec 20 11:24:28.298: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454772673s
STEP: Saw pod success
Dec 20 11:24:28.298: INFO: Pod "downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:24:28.305: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:24:28.662: INFO: Waiting for pod downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004 to disappear
Dec 20 11:24:28.895: INFO: Pod downwardapi-volume-42b21b59-231b-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:24:28.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5l9gf" for this suite.
Dec 20 11:24:34.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:24:35.179: INFO: namespace: e2e-tests-projected-5l9gf, resource: bindings, ignored listing per whitelist
Dec 20 11:24:35.371: INFO: namespace e2e-tests-projected-5l9gf deletion completed in 6.46176456s

• [SLOW TEST:17.712 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
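Editor's note: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop over the pod phase. A sketch of that loop with an injectable clock so it can run without a cluster (the parameter names are hypothetical, not the framework's API):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    # Poll the pod phase until it reaches a terminal value or the
    # timeout elapses, as in the framework's wait loop above.
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        sleep(interval)
    raise TimeoutError(f"pod still not terminal after {timeout}s")
```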
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:24:35.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 20 11:24:46.655: INFO: Successfully updated pod "labelsupdate4d724e42-231b-11ea-851f-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:24:48.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hhqbc" for this suite.
Dec 20 11:25:12.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:25:13.062: INFO: namespace: e2e-tests-downward-api-hhqbc, resource: bindings, ignored listing per whitelist
Dec 20 11:25:13.070: INFO: namespace e2e-tests-downward-api-hhqbc deletion completed in 24.305955397s

• [SLOW TEST:37.699 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:25:13.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 20 11:25:23.450: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-63cfc3f4-231b-11ea-851f-0242ac110004,GenerateName:,Namespace:e2e-tests-events-7kgqx,SelfLink:/api/v1/namespaces/e2e-tests-events-7kgqx/pods/send-events-63cfc3f4-231b-11ea-851f-0242ac110004,UID:63d171b9-231b-11ea-a994-fa163e34d433,ResourceVersion:15448393,Generation:0,CreationTimestamp:2019-12-20 11:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 387159944,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qwjfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qwjfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qwjfk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f5e070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f5e0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:25:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:25:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:25:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:25:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-20 11:25:13 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-20 11:25:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://699ca2fd34d0d159ba7bcfeb34c6a11dceb3a2dc9fe31b2fb44c6fb189052980}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 20 11:25:25.488: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 20 11:25:27.504: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:25:27.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7kgqx" for this suite.
Dec 20 11:26:07.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:26:07.715: INFO: namespace: e2e-tests-events-7kgqx, resource: bindings, ignored listing per whitelist
Dec 20 11:26:07.845: INFO: namespace e2e-tests-events-7kgqx deletion completed in 40.260759775s

• [SLOW TEST:54.774 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
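Editor's note: the "checking for scheduler event" and "checking for kubelet event" steps above filter the namespace's events by the involved pod and the reporting component. A sketch of that filter over dicts shaped like `kubectl get events` output (the sample names below are hypothetical):

```python
def events_for(events, pod_name, component):
    # Keep only events about the named pod that were reported by the
    # given source component (e.g. "default-scheduler" or "kubelet").
    return [e for e in events
            if e["involvedObject"]["name"] == pod_name
            and e["source"]["component"] == component]

# Hypothetical sample: one scheduler event and one kubelet event for our pod.
events = [
    {"involvedObject": {"name": "send-events-pod"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-pod"},
     "source": {"component": "kubelet"}, "reason": "Pulled"},
    {"involvedObject": {"name": "other-pod"},
     "source": {"component": "kubelet"}, "reason": "Pulled"},
]
```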
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:26:07.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-84703458-231b-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 11:26:08.147: INFO: Waiting up to 5m0s for pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-hwcth" to be "success or failure"
Dec 20 11:26:08.158: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.314918ms
Dec 20 11:26:10.178: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031068774s
Dec 20 11:26:12.237: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089607519s
Dec 20 11:26:14.254: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107045976s
Dec 20 11:26:16.271: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123957924s
Dec 20 11:26:18.351: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.204029834s
STEP: Saw pod success
Dec 20 11:26:18.351: INFO: Pod "pod-secrets-84719313-231b-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:26:18.369: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-84719313-231b-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 11:26:18.521: INFO: Waiting for pod pod-secrets-84719313-231b-11ea-851f-0242ac110004 to disappear
Dec 20 11:26:18.566: INFO: Pod pod-secrets-84719313-231b-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:26:18.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hwcth" for this suite.
Dec 20 11:26:24.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:26:24.791: INFO: namespace: e2e-tests-secrets-hwcth, resource: bindings, ignored listing per whitelist
Dec 20 11:26:24.839: INFO: namespace e2e-tests-secrets-hwcth deletion completed in 6.199449122s

• [SLOW TEST:16.994 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:26:24.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 20 11:26:25.032: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 20 11:26:25.041: INFO: Waiting for terminating namespaces to be deleted...
Dec 20 11:26:25.044: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 20 11:26:25.055: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 11:26:25.055: INFO: 	Container coredns ready: true, restart count 0
Dec 20 11:26:25.055: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 11:26:25.055: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 11:26:25.055: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 11:26:25.055: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 11:26:25.055: INFO: 	Container coredns ready: true, restart count 0
Dec 20 11:26:25.055: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 20 11:26:25.055: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 11:26:25.055: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 11:26:25.055: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 20 11:26:25.055: INFO: 	Container weave ready: true, restart count 0
Dec 20 11:26:25.055: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e210b416917f39], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:26:26.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nns7r" for this suite.
Dec 20 11:26:32.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:26:32.386: INFO: namespace: e2e-tests-sched-pred-nns7r, resource: bindings, ignored listing per whitelist
Dec 20 11:26:32.496: INFO: namespace e2e-tests-sched-pred-nns7r deletion completed in 6.281290466s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.657 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
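Editor's note: the FailedScheduling event above follows directly from nodeSelector matching — a pod fits a node only if every selector key/value pair is present in the node's labels. A sketch of that predicate and of reconstructing the event message (hypothetical helper names):

```python
def matches_node_selector(node_labels, selector):
    # A pod's nodeSelector matches iff every key/value pair appears in the
    # node's labels; an empty selector matches any node.
    return all(node_labels.get(k) == v for k, v in selector.items())

def failed_scheduling_message(node_labels_list, selector):
    # Reproduce the "<fit>/<total> nodes are available: ..." message format
    # seen in the scheduler event above.
    fitting = sum(matches_node_selector(labels, selector)
                  for labels in node_labels_list)
    missed = len(node_labels_list) - fitting
    return (f"{fitting}/{len(node_labels_list)} nodes are available: "
            f"{missed} node(s) didn't match node selector.")
```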
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:26:32.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 20 11:26:32.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pnhfs'
Dec 20 11:26:33.508: INFO: stderr: ""
Dec 20 11:26:33.508: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 20 11:26:34.557: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:34.558: INFO: Found 0 / 1
Dec 20 11:26:35.742: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:35.742: INFO: Found 0 / 1
Dec 20 11:26:36.552: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:36.552: INFO: Found 0 / 1
Dec 20 11:26:37.564: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:37.564: INFO: Found 0 / 1
Dec 20 11:26:40.501: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:40.501: INFO: Found 0 / 1
Dec 20 11:26:40.854: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:40.855: INFO: Found 0 / 1
Dec 20 11:26:41.542: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:41.543: INFO: Found 0 / 1
Dec 20 11:26:42.537: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:42.537: INFO: Found 0 / 1
Dec 20 11:26:43.529: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:43.529: INFO: Found 0 / 1
Dec 20 11:26:44.560: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:44.560: INFO: Found 1 / 1
Dec 20 11:26:44.560: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 20 11:26:44.589: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:44.589: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 20 11:26:44.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v98j5 --namespace=e2e-tests-kubectl-pnhfs -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 20 11:26:44.747: INFO: stderr: ""
Dec 20 11:26:44.747: INFO: stdout: "pod/redis-master-v98j5 patched\n"
STEP: checking annotations
Dec 20 11:26:44.765: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:26:44.765: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:26:44.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pnhfs" for this suite.
Dec 20 11:27:08.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:27:09.018: INFO: namespace: e2e-tests-kubectl-pnhfs, resource: bindings, ignored listing per whitelist
Dec 20 11:27:09.021: INFO: namespace e2e-tests-kubectl-pnhfs deletion completed in 24.209964795s

• [SLOW TEST:36.523 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
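Editor's note: the `kubectl patch ... -p {"metadata":{"annotations":{"x":"y"}}}` call above merges the patch into the existing object rather than replacing it. A minimal recursive merge over map fields, enough to model this case (real strategic merge patch also handles lists with patch merge keys, which this sketch omits):

```python
import copy

def strategic_merge(obj, patch):
    # Recursively merge map-valued fields; non-map values in the patch
    # overwrite the original. The input object is left unmodified.
    merged = copy.deepcopy(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = strategic_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```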
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:27:09.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 20 11:27:09.278: INFO: Number of nodes with available pods: 0
Dec 20 11:27:09.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:11.214: INFO: Number of nodes with available pods: 0
Dec 20 11:27:11.214: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:11.742: INFO: Number of nodes with available pods: 0
Dec 20 11:27:11.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:12.818: INFO: Number of nodes with available pods: 0
Dec 20 11:27:12.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:13.315: INFO: Number of nodes with available pods: 0
Dec 20 11:27:13.315: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:14.295: INFO: Number of nodes with available pods: 0
Dec 20 11:27:14.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:15.326: INFO: Number of nodes with available pods: 0
Dec 20 11:27:15.327: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:16.315: INFO: Number of nodes with available pods: 0
Dec 20 11:27:16.315: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:17.643: INFO: Number of nodes with available pods: 0
Dec 20 11:27:17.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:18.306: INFO: Number of nodes with available pods: 0
Dec 20 11:27:18.306: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:19.298: INFO: Number of nodes with available pods: 0
Dec 20 11:27:19.298: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:20.305: INFO: Number of nodes with available pods: 1
Dec 20 11:27:20.305: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 20 11:27:20.387: INFO: Number of nodes with available pods: 0
Dec 20 11:27:20.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:21.411: INFO: Number of nodes with available pods: 0
Dec 20 11:27:21.411: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:22.418: INFO: Number of nodes with available pods: 0
Dec 20 11:27:22.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:23.968: INFO: Number of nodes with available pods: 0
Dec 20 11:27:23.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:24.442: INFO: Number of nodes with available pods: 0
Dec 20 11:27:24.442: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:25.417: INFO: Number of nodes with available pods: 0
Dec 20 11:27:25.417: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:26.414: INFO: Number of nodes with available pods: 0
Dec 20 11:27:26.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:27.418: INFO: Number of nodes with available pods: 0
Dec 20 11:27:27.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:28.400: INFO: Number of nodes with available pods: 0
Dec 20 11:27:28.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:29.415: INFO: Number of nodes with available pods: 0
Dec 20 11:27:29.416: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:30.420: INFO: Number of nodes with available pods: 0
Dec 20 11:27:30.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:31.458: INFO: Number of nodes with available pods: 0
Dec 20 11:27:31.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:32.414: INFO: Number of nodes with available pods: 0
Dec 20 11:27:32.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:33.414: INFO: Number of nodes with available pods: 0
Dec 20 11:27:33.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:34.418: INFO: Number of nodes with available pods: 0
Dec 20 11:27:34.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:35.407: INFO: Number of nodes with available pods: 0
Dec 20 11:27:35.407: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:36.444: INFO: Number of nodes with available pods: 0
Dec 20 11:27:36.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:37.450: INFO: Number of nodes with available pods: 0
Dec 20 11:27:37.450: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:38.726: INFO: Number of nodes with available pods: 0
Dec 20 11:27:38.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:39.502: INFO: Number of nodes with available pods: 0
Dec 20 11:27:39.502: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:40.414: INFO: Number of nodes with available pods: 0
Dec 20 11:27:40.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 11:27:41.588: INFO: Number of nodes with available pods: 1
Dec 20 11:27:41.589: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-24t85, will wait for the garbage collector to delete the pods
Dec 20 11:27:41.702: INFO: Deleting DaemonSet.extensions daemon-set took: 45.391555ms
Dec 20 11:27:41.802: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.449737ms
Dec 20 11:27:52.720: INFO: Number of nodes with available pods: 0
Dec 20 11:27:52.720: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 11:27:52.747: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-24t85/daemonsets","resourceVersion":"15448700"},"items":null}

Dec 20 11:27:52.936: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-24t85/pods","resourceVersion":"15448700"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:27:53.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-24t85" for this suite.
Dec 20 11:27:59.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:27:59.164: INFO: namespace: e2e-tests-daemonsets-24t85, resource: bindings, ignored listing per whitelist
Dec 20 11:27:59.241: INFO: namespace e2e-tests-daemonsets-24t85 deletion completed in 6.23534101s

• [SLOW TEST:50.220 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
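The DaemonSet exercised in the test above is built programmatically by the e2e framework; a minimal manifest with the same shape is sketched below. The labels and image are illustrative assumptions, not values read from this log.

```yaml
# Hypothetical manifest resembling the "daemon-set" object this test creates.
# Image and label values are assumptions; the framework constructs its own spec.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: e2e-tests-daemonsets-24t85
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

With a single schedulable node, the controller converges to "Number of running nodes: 1, number of available pods: 1" once the one daemon pod becomes available, which is the condition the log polls for; deleting the pod triggers the controller to recreate ("revive") it.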
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:27:59.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-psdp
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 11:27:59.569: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-psdp" in namespace "e2e-tests-subpath-q5xkh" to be "success or failure"
Dec 20 11:27:59.590: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.494488ms
Dec 20 11:28:01.610: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040377038s
Dec 20 11:28:03.661: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091371953s
Dec 20 11:28:05.989: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418845769s
Dec 20 11:28:08.090: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520227454s
Dec 20 11:28:10.099: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.529327376s
Dec 20 11:28:12.118: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.547972776s
Dec 20 11:28:14.379: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.809607498s
Dec 20 11:28:16.449: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.878870106s
Dec 20 11:28:18.468: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 18.898306678s
Dec 20 11:28:20.489: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 20.919112308s
Dec 20 11:28:22.512: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 22.942562742s
Dec 20 11:28:24.544: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 24.974038239s
Dec 20 11:28:26.570: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 26.999786298s
Dec 20 11:28:28.654: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 29.083869333s
Dec 20 11:28:30.667: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 31.09735324s
Dec 20 11:28:32.689: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Running", Reason="", readiness=false. Elapsed: 33.119211257s
Dec 20 11:28:34.752: INFO: Pod "pod-subpath-test-configmap-psdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.182453156s
STEP: Saw pod success
Dec 20 11:28:34.753: INFO: Pod "pod-subpath-test-configmap-psdp" satisfied condition "success or failure"
Dec 20 11:28:34.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-psdp container test-container-subpath-configmap-psdp: 
STEP: delete the pod
Dec 20 11:28:34.982: INFO: Waiting for pod pod-subpath-test-configmap-psdp to disappear
Dec 20 11:28:34.993: INFO: Pod pod-subpath-test-configmap-psdp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-psdp
Dec 20 11:28:34.993: INFO: Deleting pod "pod-subpath-test-configmap-psdp" in namespace "e2e-tests-subpath-q5xkh"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:28:34.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-q5xkh" for this suite.
Dec 20 11:28:41.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:28:41.190: INFO: namespace: e2e-tests-subpath-q5xkh, resource: bindings, ignored listing per whitelist
Dec 20 11:28:41.241: INFO: namespace e2e-tests-subpath-q5xkh deletion completed in 6.234238454s

• [SLOW TEST:41.999 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
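The subpath scenario above mounts a single ConfigMap key over an existing file inside the container using `subPath`. A minimal sketch of such a pod follows; the ConfigMap name, key, and target path are illustrative assumptions, not values taken from this log.

```yaml
# Hypothetical pod mounting one ConfigMap key over an existing file.
# Using subPath mounts only the named key instead of shadowing a directory.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/resolv.conf"]
    volumeMounts:
    - name: config
      mountPath: /etc/resolv.conf   # existing file in the container image
      subPath: resolv.conf          # key inside the ConfigMap (assumed name)
  volumes:
  - name: config
    configMap:
      name: my-configmap            # assumed name
```

One design note relevant to the "Atomic writer volumes" label: ConfigMap volumes are updated via an atomic symlink swap, but `subPath` mounts bypass that mechanism, so a container using `subPath` does not receive ConfigMap updates after the pod starts.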
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:28:41.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 11:28:41.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-94xkb'
Dec 20 11:28:41.539: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 11:28:41.539: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 20 11:28:43.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-94xkb'
Dec 20 11:28:44.041: INFO: stderr: ""
Dec 20 11:28:44.041: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:28:44.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-94xkb" for this suite.
Dec 20 11:28:50.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:28:50.348: INFO: namespace: e2e-tests-kubectl-94xkb, resource: bindings, ignored listing per whitelist
Dec 20 11:28:50.381: INFO: namespace e2e-tests-kubectl-94xkb deletion completed in 6.329011466s

• [SLOW TEST:9.139 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:28:50.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 11:28:50.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:29:01.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6blbv" for this suite.
Dec 20 11:29:45.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:29:45.933: INFO: namespace: e2e-tests-pods-6blbv, resource: bindings, ignored listing per whitelist
Dec 20 11:29:46.122: INFO: namespace e2e-tests-pods-6blbv deletion completed in 44.827497649s

• [SLOW TEST:55.741 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:29:46.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-lfkjx
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-lfkjx
STEP: Deleting pre-stop pod
Dec 20 11:30:11.532: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:30:11.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-lfkjx" for this suite.
Dec 20 11:30:51.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:30:51.899: INFO: namespace: e2e-tests-prestop-lfkjx, resource: bindings, ignored listing per whitelist
Dec 20 11:30:52.093: INFO: namespace e2e-tests-prestop-lfkjx deletion completed in 40.37750096s

• [SLOW TEST:65.970 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
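The behavior verified above (the server pod records a `"prestop": 1` hit when the tester pod is deleted) relies on a pod-level preStop lifecycle hook. A minimal sketch, with the hook endpoint, image, and names as assumptions rather than values from this log:

```yaml
# Hypothetical pod whose preStop hook notifies a peer before shutdown.
# The URL and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-q", "-O-", "http://server:8080/prestop"]
```

The kubelet executes the preStop hook before sending SIGTERM to the container, and the hook must complete within the pod's termination grace period, which is why the test can observe the hit on the server side before the tester pod is gone.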
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:30:52.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 11:30:52.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-rj72z'
Dec 20 11:30:52.591: INFO: stderr: ""
Dec 20 11:30:52.591: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 20 11:31:02.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-rj72z -o json'
Dec 20 11:31:02.803: INFO: stderr: ""
Dec 20 11:31:02.803: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-20T11:30:52Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-rj72z\",\n        \"resourceVersion\": \"15449088\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-rj72z/pods/e2e-test-nginx-pod\",\n        \"uid\": \"2df791c3-231c-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-grhzp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-grhzp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-grhzp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T11:30:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T11:31:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T11:31:00Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-20T11:30:52Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c22adcf8c59d212573166785438dded617f65368c98f2f81d664105179ebdd6b\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-20T11:31:00Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-20T11:30:52Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 20 11:31:02.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-rj72z'
Dec 20 11:31:03.304: INFO: stderr: ""
Dec 20 11:31:03.305: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 20 11:31:03.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-rj72z'
Dec 20 11:31:12.426: INFO: stderr: ""
Dec 20 11:31:12.426: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:31:12.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rj72z" for this suite.
Dec 20 11:31:18.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:31:18.538: INFO: namespace: e2e-tests-kubectl-rj72z, resource: bindings, ignored listing per whitelist
Dec 20 11:31:18.711: INFO: namespace e2e-tests-kubectl-rj72z deletion completed in 6.27830219s

• [SLOW TEST:26.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
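The replace step above pipes a modified copy of the pod's JSON to `kubectl replace -f -`, swapping the image to docker.io/library/busybox:1.29. An equivalent minimal YAML is sketched below; `kubectl replace` expects the complete object, so in practice the unchanged fields elided here would be carried over from the `kubectl get pod -o json` output shown in the log.

```yaml
# Hypothetical replacement manifest: same pod identity, image changed.
# Fields other than the image are elided for brevity (assumption: they are
# copied unchanged from the pod's current JSON, as the test does).
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-rj72z
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
```

Container image is one of the few pod spec fields that is mutable in place, which is why this replace succeeds without deleting and recreating the pod.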
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:31:18.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 20 11:31:18.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 20 11:31:19.052: INFO: stderr: ""
Dec 20 11:31:19.052: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:31:19.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pbzlk" for this suite.
Dec 20 11:31:25.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:31:25.221: INFO: namespace: e2e-tests-kubectl-pbzlk, resource: bindings, ignored listing per whitelist
Dec 20 11:31:25.255: INFO: namespace e2e-tests-kubectl-pbzlk deletion completed in 6.196347289s

• [SLOW TEST:6.543 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:31:25.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:31:25.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pmfrg" for this suite.
Dec 20 11:31:49.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:31:49.923: INFO: namespace: e2e-tests-pods-pmfrg, resource: bindings, ignored listing per whitelist
Dec 20 11:31:49.973: INFO: namespace e2e-tests-pods-pmfrg deletion completed in 24.454707801s

• [SLOW TEST:24.718 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:31:49.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 20 11:31:50.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v8lt'
Dec 20 11:31:50.593: INFO: stderr: ""
Dec 20 11:31:50.593: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 20 11:31:51.622: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:51.622: INFO: Found 0 / 1
Dec 20 11:31:52.613: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:52.613: INFO: Found 0 / 1
Dec 20 11:31:55.040: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:55.040: INFO: Found 0 / 1
Dec 20 11:31:55.690: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:55.690: INFO: Found 0 / 1
Dec 20 11:31:56.613: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:56.613: INFO: Found 0 / 1
Dec 20 11:31:58.139: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:58.139: INFO: Found 0 / 1
Dec 20 11:31:58.612: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:58.612: INFO: Found 0 / 1
Dec 20 11:31:59.886: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:31:59.886: INFO: Found 0 / 1
Dec 20 11:32:00.608: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:32:00.609: INFO: Found 0 / 1
Dec 20 11:32:01.608: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:32:01.608: INFO: Found 0 / 1
Dec 20 11:32:02.628: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:32:02.628: INFO: Found 1 / 1
Dec 20 11:32:02.628: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 20 11:32:02.642: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:32:02.642: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 20 11:32:02.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt'
Dec 20 11:32:02.816: INFO: stderr: ""
Dec 20 11:32:02.816: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Dec 11:32:00.778 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 11:32:00.779 # Server started, Redis version 3.2.12\n1:M 20 Dec 11:32:00.779 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 11:32:00.779 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 20 11:32:02.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt --tail=1'
Dec 20 11:32:03.053: INFO: stderr: ""
Dec 20 11:32:03.053: INFO: stdout: "1:M 20 Dec 11:32:00.779 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 20 11:32:03.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt --limit-bytes=1'
Dec 20 11:32:03.260: INFO: stderr: ""
Dec 20 11:32:03.260: INFO: stdout: " "
STEP: exposing timestamps
Dec 20 11:32:03.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt --tail=1 --timestamps'
Dec 20 11:32:03.387: INFO: stderr: ""
Dec 20 11:32:03.387: INFO: stdout: "2019-12-20T11:32:00.780304967Z 1:M 20 Dec 11:32:00.779 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 20 11:32:05.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt --since=1s'
Dec 20 11:32:06.097: INFO: stderr: ""
Dec 20 11:32:06.097: INFO: stdout: ""
Dec 20 11:32:06.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-m4jg6 redis-master --namespace=e2e-tests-kubectl-4v8lt --since=24h'
Dec 20 11:32:06.232: INFO: stderr: ""
Dec 20 11:32:06.233: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Dec 11:32:00.778 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 11:32:00.779 # Server started, Redis version 3.2.12\n1:M 20 Dec 11:32:00.779 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 11:32:00.779 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 20 11:32:06.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v8lt'
Dec 20 11:32:06.628: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 11:32:06.628: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 20 11:32:06.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-4v8lt'
Dec 20 11:32:06.824: INFO: stderr: "No resources found.\n"
Dec 20 11:32:06.824: INFO: stdout: ""
Dec 20 11:32:06.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-4v8lt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 20 11:32:07.071: INFO: stderr: ""
Dec 20 11:32:07.071: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:32:07.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4v8lt" for this suite.
Dec 20 11:32:31.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:32:31.355: INFO: namespace: e2e-tests-kubectl-4v8lt, resource: bindings, ignored listing per whitelist
Dec 20 11:32:31.355: INFO: namespace e2e-tests-kubectl-4v8lt deletion completed in 24.274216458s

• [SLOW TEST:41.381 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:32:31.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 20 11:32:31.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 20 11:32:31.682: INFO: stderr: ""
Dec 20 11:32:31.682: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:32:31.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sgs9m" for this suite.
Dec 20 11:32:37.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:32:37.915: INFO: namespace: e2e-tests-kubectl-sgs9m, resource: bindings, ignored listing per whitelist
Dec 20 11:32:37.975: INFO: namespace e2e-tests-kubectl-sgs9m deletion completed in 6.269252361s

• [SLOW TEST:6.620 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:32:37.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 11:32:38.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-59wr5'
Dec 20 11:32:38.451: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 11:32:38.451: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 20 11:32:40.560: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-t2hjr]
Dec 20 11:32:40.560: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-t2hjr" in namespace "e2e-tests-kubectl-59wr5" to be "running and ready"
Dec 20 11:32:40.604: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Pending", Reason="", readiness=false. Elapsed: 43.903295ms
Dec 20 11:32:42.635: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075097418s
Dec 20 11:32:44.880: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320007055s
Dec 20 11:32:47.167: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607281379s
Dec 20 11:32:49.191: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631281692s
Dec 20 11:32:51.210: INFO: Pod "e2e-test-nginx-rc-t2hjr": Phase="Running", Reason="", readiness=true. Elapsed: 10.650264759s
Dec 20 11:32:51.210: INFO: Pod "e2e-test-nginx-rc-t2hjr" satisfied condition "running and ready"
Dec 20 11:32:51.210: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-t2hjr]
Dec 20 11:32:51.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59wr5'
Dec 20 11:32:53.090: INFO: stderr: ""
Dec 20 11:32:53.090: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 20 11:32:53.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59wr5'
Dec 20 11:32:53.231: INFO: stderr: ""
Dec 20 11:32:53.231: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:32:53.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-59wr5" for this suite.
Dec 20 11:33:15.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:33:15.401: INFO: namespace: e2e-tests-kubectl-59wr5, resource: bindings, ignored listing per whitelist
Dec 20 11:33:15.420: INFO: namespace e2e-tests-kubectl-59wr5 deletion completed in 22.168133425s

• [SLOW TEST:37.443 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:33:15.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 11:33:15.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 20 11:33:15.916: INFO: stderr: ""
Dec 20 11:33:15.916: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:33:15.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-77zq8" for this suite.
Dec 20 11:33:22.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:33:22.328: INFO: namespace: e2e-tests-kubectl-77zq8, resource: bindings, ignored listing per whitelist
Dec 20 11:33:22.360: INFO: namespace e2e-tests-kubectl-77zq8 deletion completed in 6.337950317s

• [SLOW TEST:6.939 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:33:22.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 20 11:33:22.644: INFO: Waiting up to 5m0s for pod "pod-876c67f2-231c-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-29bcn" to be "success or failure"
Dec 20 11:33:22.738: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 93.514994ms
Dec 20 11:33:24.767: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122304134s
Dec 20 11:33:26.780: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135753427s
Dec 20 11:33:29.402: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.75725337s
Dec 20 11:33:31.415: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.770304973s
Dec 20 11:33:33.441: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.79687394s
STEP: Saw pod success
Dec 20 11:33:33.442: INFO: Pod "pod-876c67f2-231c-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:33:33.449: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-876c67f2-231c-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:33:34.599: INFO: Waiting for pod pod-876c67f2-231c-11ea-851f-0242ac110004 to disappear
Dec 20 11:33:34.647: INFO: Pod pod-876c67f2-231c-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:33:34.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-29bcn" for this suite.
Dec 20 11:33:40.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:33:40.892: INFO: namespace: e2e-tests-emptydir-29bcn, resource: bindings, ignored listing per whitelist
Dec 20 11:33:40.912: INFO: namespace e2e-tests-emptydir-29bcn deletion completed in 6.157615521s

• [SLOW TEST:18.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:33:40.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-927875e7-231c-11ea-851f-0242ac110004
STEP: Creating secret with name s-test-opt-upd-927876fe-231c-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-927875e7-231c-11ea-851f-0242ac110004
STEP: Updating secret s-test-opt-upd-927876fe-231c-11ea-851f-0242ac110004
STEP: Creating secret with name s-test-opt-create-92787724-231c-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:33:59.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8h9bl" for this suite.
Dec 20 11:34:23.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:34:23.603: INFO: namespace: e2e-tests-secrets-8h9bl, resource: bindings, ignored listing per whitelist
Dec 20 11:34:23.934: INFO: namespace e2e-tests-secrets-8h9bl deletion completed in 24.424061365s

• [SLOW TEST:43.022 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:34:23.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ac2a7cb0-231c-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 11:34:24.517: INFO: Waiting up to 5m0s for pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-qcwtl" to be "success or failure"
Dec 20 11:34:24.672: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 154.344555ms
Dec 20 11:34:26.802: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28402071s
Dec 20 11:34:28.819: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301652202s
Dec 20 11:34:30.938: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420888164s
Dec 20 11:34:32.980: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.462525373s
Dec 20 11:34:35.013: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495334154s
STEP: Saw pod success
Dec 20 11:34:35.013: INFO: Pod "pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:34:35.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 11:34:35.351: INFO: Waiting for pod pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004 to disappear
Dec 20 11:34:35.364: INFO: Pod pod-secrets-ac47c09d-231c-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:34:35.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qcwtl" for this suite.
Dec 20 11:34:42.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:34:42.670: INFO: namespace: e2e-tests-secrets-qcwtl, resource: bindings, ignored listing per whitelist
Dec 20 11:34:42.681: INFO: namespace e2e-tests-secrets-qcwtl deletion completed in 7.295742769s
STEP: Destroying namespace "e2e-tests-secret-namespace-g6pc2" for this suite.
Dec 20 11:34:48.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:34:48.868: INFO: namespace: e2e-tests-secret-namespace-g6pc2, resource: bindings, ignored listing per whitelist
Dec 20 11:34:48.934: INFO: namespace e2e-tests-secret-namespace-g6pc2 deletion completed in 6.252803744s

• [SLOW TEST:24.999 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:34:48.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-bafcdeca-231c-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 11:34:49.159: INFO: Waiting up to 5m0s for pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-hr6x7" to be "success or failure"
Dec 20 11:34:49.165: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.801552ms
Dec 20 11:34:51.191: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032251707s
Dec 20 11:34:53.216: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056811457s
Dec 20 11:34:55.700: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541178436s
Dec 20 11:34:58.282: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.12267144s
Dec 20 11:35:00.418: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.258737821s
STEP: Saw pod success
Dec 20 11:35:00.418: INFO: Pod "pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:35:00.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 11:35:00.685: INFO: Waiting for pod pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004 to disappear
Dec 20 11:35:00.750: INFO: Pod pod-configmaps-bafddec7-231c-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:35:00.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hr6x7" for this suite.
Dec 20 11:35:06.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:35:06.923: INFO: namespace: e2e-tests-configmap-hr6x7, resource: bindings, ignored listing per whitelist
Dec 20 11:35:06.993: INFO: namespace e2e-tests-configmap-hr6x7 deletion completed in 6.216049264s

• [SLOW TEST:18.057 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
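[Editor's note] The ConfigMap volume test above creates a ConfigMap, mounts it into a pod running as non-root, and waits for the pod to reach "success or failure". A hedged sketch of that setup follows; the image, data keys, and resource names are illustrative (the real objects carry generated UID suffixes, and the e2e suite uses its own test images):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # log names append a generated UID suffix
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000                # the "as non-root" part of the test name
  restartPolicy: Never             # lets the pod reach Succeeded, the "success" condition
  containers:
  - name: configmap-volume-test    # container name matches the log line above
    image: busybox                 # illustrative image choice
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

The framework then fetches the container's logs (the "Trying to get logs" line) to verify the mounted file's contents before deleting the pod.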
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:35:06.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 20 11:35:07.387: INFO: Waiting up to 5m0s for pod "pod-c5dac50e-231c-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-c96r7" to be "success or failure"
Dec 20 11:35:07.401: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.50517ms
Dec 20 11:35:09.419: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031598803s
Dec 20 11:35:11.431: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043464894s
Dec 20 11:35:13.522: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13469829s
Dec 20 11:35:15.817: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429761583s
Dec 20 11:35:18.106: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.718825234s
STEP: Saw pod success
Dec 20 11:35:18.106: INFO: Pod "pod-c5dac50e-231c-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:35:18.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c5dac50e-231c-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:35:18.320: INFO: Waiting for pod pod-c5dac50e-231c-11ea-851f-0242ac110004 to disappear
Dec 20 11:35:18.381: INFO: Pod pod-c5dac50e-231c-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:35:18.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c96r7" for this suite.
Dec 20 11:35:26.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:35:26.493: INFO: namespace: e2e-tests-emptydir-c96r7, resource: bindings, ignored listing per whitelist
Dec 20 11:35:26.681: INFO: namespace e2e-tests-emptydir-c96r7 deletion completed in 8.292200295s

• [SLOW TEST:19.689 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
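[Editor's note] The EmptyDir "(root,0666,tmpfs)" test above exercises a RAM-backed emptyDir volume with a file created at mode 0666 by a root container. A hedged sketch, assuming an illustrative busybox image and file path (not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container           # container name from the log
    image: busybox                 # illustrative image choice
    # create a file, force its mode to 0666, and print it for verification
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # "tmpfs" in the test name: RAM-backed emptyDir
```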
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:35:26.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 20 11:35:26.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:27.377: INFO: stderr: ""
Dec 20 11:35:27.377: INFO: stdout: "pod/pause created\n"
Dec 20 11:35:27.378: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 20 11:35:27.378: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-ddq2g" to be "running and ready"
Dec 20 11:35:27.435: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 56.943565ms
Dec 20 11:35:29.455: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077149097s
Dec 20 11:35:31.483: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104822817s
Dec 20 11:35:33.498: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120222546s
Dec 20 11:35:35.524: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146265923s
Dec 20 11:35:37.541: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.163490462s
Dec 20 11:35:37.541: INFO: Pod "pause" satisfied condition "running and ready"
Dec 20 11:35:37.542: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 20 11:35:37.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:37.821: INFO: stderr: ""
Dec 20 11:35:37.821: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 20 11:35:37.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:38.019: INFO: stderr: ""
Dec 20 11:35:38.019: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 20 11:35:38.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:38.291: INFO: stderr: ""
Dec 20 11:35:38.291: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 20 11:35:38.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:38.420: INFO: stderr: ""
Dec 20 11:35:38.420: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 20 11:35:38.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:38.666: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 11:35:38.666: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 20 11:35:38.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-ddq2g'
Dec 20 11:35:38.818: INFO: stderr: "No resources found.\n"
Dec 20 11:35:38.819: INFO: stdout: ""
Dec 20 11:35:38.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-ddq2g -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 20 11:35:38.970: INFO: stderr: ""
Dec 20 11:35:38.970: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:35:38.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ddq2g" for this suite.
Dec 20 11:35:45.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:35:45.492: INFO: namespace: e2e-tests-kubectl-ddq2g, resource: bindings, ignored listing per whitelist
Dec 20 11:35:45.521: INFO: namespace e2e-tests-kubectl-ddq2g deletion completed in 6.528841431s

• [SLOW TEST:18.839 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
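[Editor's note] The Kubectl label test above performs three operations that can be reproduced by hand against a running pod named `pause`. The commands below are the ones the log shows, minus the `--kubeconfig`/`--namespace` plumbing; they require a live cluster, so they are a transcript fragment rather than a runnable script:

```
kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L prints the label as an extra column
kubectl label pods pause testing-label-                      # trailing '-' removes the label
```

Note the removal syntax: `key-` with no value deletes the label, which is why the final `get` shows an empty TESTING-LABEL column.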
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:35:45.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 20 11:35:56.514: INFO: Successfully updated pod "annotationupdatedcbd4d46-231c-11ea-851f-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:35:58.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-srqj6" for this suite.
Dec 20 11:36:22.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:36:22.813: INFO: namespace: e2e-tests-downward-api-srqj6, resource: bindings, ignored listing per whitelist
Dec 20 11:36:23.013: INFO: namespace e2e-tests-downward-api-srqj6 deletion completed in 24.325664016s

• [SLOW TEST:37.490 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:36:23.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:36:23.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-rg6w2" to be "success or failure"
Dec 20 11:36:23.635: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.475923ms
Dec 20 11:36:26.420: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812459961s
Dec 20 11:36:28.468: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860100719s
Dec 20 11:36:30.996: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.388073282s
Dec 20 11:36:33.010: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.40209111s
Dec 20 11:36:35.023: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.415362104s
STEP: Saw pod success
Dec 20 11:36:35.023: INFO: Pod "downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:36:35.028: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:36:35.488: INFO: Waiting for pod downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004 to disappear
Dec 20 11:36:35.838: INFO: Pod downwardapi-volume-f346899c-231c-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:36:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rg6w2" for this suite.
Dec 20 11:36:41.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:36:42.084: INFO: namespace: e2e-tests-downward-api-rg6w2, resource: bindings, ignored listing per whitelist
Dec 20 11:36:42.201: INFO: namespace e2e-tests-downward-api-rg6w2 deletion completed in 6.331304225s

• [SLOW TEST:19.188 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
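[Editor's note] The Downward API volume tests above ("should provide container's cpu limit" and its projected/default-limit variants) expose a container's resource fields as files via a `downwardAPI` volume. A hedged sketch of the mechanism; the cpu value, paths, and image are illustrative, not from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container         # container name from the log
    image: busybox                 # illustrative image choice
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                # the limit the volume file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the limit in millicores
```

In the "default cpu limit if the limit is not set" variant, the `resources.limits` stanza is omitted and the file falls back to the node's allocatable CPU.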
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:36:42.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-kjwnr
Dec 20 11:36:50.418: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-kjwnr
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 11:36:50.441: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:40:51.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kjwnr" for this suite.
Dec 20 11:41:00.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:41:00.256: INFO: namespace: e2e-tests-container-probe-kjwnr, resource: bindings, ignored listing per whitelist
Dec 20 11:41:00.358: INFO: namespace e2e-tests-container-probe-kjwnr deletion completed in 8.293273535s

• [SLOW TEST:258.157 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
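[Editor's note] The liveness-probe test above runs for the full four-minute observation window (hence the 258-second duration) verifying that `restartCount` stays at 0. A hedged sketch of a pod whose exec probe always succeeds, matching the `cat /tmp/health` probe in the test name; the image and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec              # pod name from the log
spec:
  containers:
  - name: liveness
    image: busybox                 # illustrative image choice
    # keep /tmp/health present for the container's whole life,
    # so every probe invocation of `cat /tmp/health` exits 0
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
```

Because the probe never fails, the kubelet never restarts the container, which is exactly what the framework asserts by polling the restart count.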
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:41:00.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:41:00.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-5jfvd" to be "success or failure"
Dec 20 11:41:00.764: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.852889ms
Dec 20 11:41:02.778: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032879677s
Dec 20 11:41:04.805: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05979929s
Dec 20 11:41:06.824: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079347471s
Dec 20 11:41:08.855: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110241135s
Dec 20 11:41:10.881: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135960636s
STEP: Saw pod success
Dec 20 11:41:10.881: INFO: Pod "downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:41:10.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:41:11.158: INFO: Waiting for pod downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004 to disappear
Dec 20 11:41:11.168: INFO: Pod downwardapi-volume-98710f7b-231d-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:41:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5jfvd" for this suite.
Dec 20 11:41:17.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:41:17.483: INFO: namespace: e2e-tests-downward-api-5jfvd, resource: bindings, ignored listing per whitelist
Dec 20 11:41:17.499: INFO: namespace e2e-tests-downward-api-5jfvd deletion completed in 6.323252412s

• [SLOW TEST:17.140 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:41:17.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:41:17.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-vwd45" to be "success or failure"
Dec 20 11:41:17.810: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.792919ms
Dec 20 11:41:19.889: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094198214s
Dec 20 11:41:21.924: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129600033s
Dec 20 11:41:23.977: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18226499s
Dec 20 11:41:26.303: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.508690706s
Dec 20 11:41:28.339: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.543796785s
STEP: Saw pod success
Dec 20 11:41:28.339: INFO: Pod "downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:41:28.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:41:28.626: INFO: Waiting for pod downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004 to disappear
Dec 20 11:41:28.674: INFO: Pod downwardapi-volume-a29ba11f-231d-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:41:28.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwd45" for this suite.
Dec 20 11:41:34.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:41:35.060: INFO: namespace: e2e-tests-projected-vwd45, resource: bindings, ignored listing per whitelist
Dec 20 11:41:35.128: INFO: namespace e2e-tests-projected-vwd45 deletion completed in 6.435744407s

• [SLOW TEST:17.628 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:41:35.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-ad167cc9-231d-11ea-851f-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-ad167e0d-231d-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ad167cc9-231d-11ea-851f-0242ac110004
STEP: Updating configmap cm-test-opt-upd-ad167e0d-231d-11ea-851f-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-ad167e58-231d-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:41:51.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j29tb" for this suite.
Dec 20 11:42:15.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:42:15.851: INFO: namespace: e2e-tests-projected-j29tb, resource: bindings, ignored listing per whitelist
Dec 20 11:42:16.016: INFO: namespace e2e-tests-projected-j29tb deletion completed in 24.275786875s

• [SLOW TEST:40.888 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:42:16.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 20 11:42:16.187: INFO: namespace e2e-tests-kubectl-x6n8j
Dec 20 11:42:16.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x6n8j'
Dec 20 11:42:16.648: INFO: stderr: ""
Dec 20 11:42:16.648: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 20 11:42:17.672: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:17.672: INFO: Found 0 / 1
Dec 20 11:42:18.674: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:18.674: INFO: Found 0 / 1
Dec 20 11:42:19.669: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:19.669: INFO: Found 0 / 1
Dec 20 11:42:20.676: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:20.677: INFO: Found 0 / 1
Dec 20 11:42:21.722: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:21.723: INFO: Found 0 / 1
Dec 20 11:42:22.967: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:22.967: INFO: Found 0 / 1
Dec 20 11:42:23.737: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:23.738: INFO: Found 0 / 1
Dec 20 11:42:24.657: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:24.657: INFO: Found 0 / 1
Dec 20 11:42:25.685: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:25.685: INFO: Found 0 / 1
Dec 20 11:42:26.663: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:26.663: INFO: Found 1 / 1
Dec 20 11:42:26.663: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 20 11:42:26.669: INFO: Selector matched 1 pods for map[app:redis]
Dec 20 11:42:26.669: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 20 11:42:26.669: INFO: wait on redis-master startup in e2e-tests-kubectl-x6n8j 
Dec 20 11:42:26.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5jvps redis-master --namespace=e2e-tests-kubectl-x6n8j'
Dec 20 11:42:26.854: INFO: stderr: ""
Dec 20 11:42:26.854: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 20 Dec 11:42:24.885 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Dec 11:42:24.885 # Server started, Redis version 3.2.12\n1:M 20 Dec 11:42:24.885 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Dec 11:42:24.885 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 20 11:42:26.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-x6n8j'
Dec 20 11:42:27.137: INFO: stderr: ""
Dec 20 11:42:27.137: INFO: stdout: "service/rm2 exposed\n"
Dec 20 11:42:27.146: INFO: Service rm2 in namespace e2e-tests-kubectl-x6n8j found.
STEP: exposing service
Dec 20 11:42:29.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-x6n8j'
Dec 20 11:42:29.489: INFO: stderr: ""
Dec 20 11:42:29.490: INFO: stdout: "service/rm3 exposed\n"
Dec 20 11:42:29.517: INFO: Service rm3 in namespace e2e-tests-kubectl-x6n8j found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:42:31.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x6n8j" for this suite.
Dec 20 11:42:57.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:42:57.655: INFO: namespace: e2e-tests-kubectl-x6n8j, resource: bindings, ignored listing per whitelist
Dec 20 11:42:57.771: INFO: namespace e2e-tests-kubectl-x6n8j deletion completed in 26.20423038s

• [SLOW TEST:41.754 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
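Annotator's note: the expose sequence above builds a chain (rm3:2345 → rm2:1234 → redis-master:6379) in which each `kubectl expose` pairs a new service port with the same targetPort on the selected pods. A minimal sketch of that resolution, using the names and ports from this run (the data model is a simplified illustration, not kubectl's implementation):

```python
# Simplified illustration: each `kubectl expose` copies the selector of the
# source object and pairs a new service port with a targetPort on the pods.
# Names and ports below come from the log; the data model is hypothetical.

services = {
    # service name: (service port, targetPort on the selected pods)
    "rm2": (1234, 6379),  # expose rc redis-master --port=1234 --target-port=6379
    "rm3": (2345, 6379),  # expose service rm2 --port=2345 --target-port=6379
}

def backend_port(service: str) -> int:
    """Return the container port a service forwards traffic to."""
    _, target = services[service]
    return target

# Both services forward to the Redis container port.
assert backend_port("rm2") == backend_port("rm3") == 6379
```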
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:42:57.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-de5aa64e-231d-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 11:42:57.984: INFO: Waiting up to 5m0s for pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-292nf" to be "success or failure"
Dec 20 11:42:57.994: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225163ms
Dec 20 11:43:00.261: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277857603s
Dec 20 11:43:02.280: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296765037s
Dec 20 11:43:04.293: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309456923s
Dec 20 11:43:06.342: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35793466s
Dec 20 11:43:08.357: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.372890204s
STEP: Saw pod success
Dec 20 11:43:08.357: INFO: Pod "pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:43:08.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 11:43:09.082: INFO: Waiting for pod pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004 to disappear
Dec 20 11:43:09.306: INFO: Pod pod-configmaps-de5b42d1-231d-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:43:09.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-292nf" for this suite.
Dec 20 11:43:15.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:43:15.804: INFO: namespace: e2e-tests-configmap-292nf, resource: bindings, ignored listing per whitelist
Dec 20 11:43:15.936: INFO: namespace e2e-tests-configmap-292nf deletion completed in 6.614329702s

• [SLOW TEST:18.165 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
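Annotator's note: the ConfigMap spec above projects one ConfigMap into two volumes of the same pod, so identical data appears under two mount paths. A sketch of that scenario (the key, value, and mount paths are hypothetical; the in-memory "filesystem" is an illustration, not kubelet behavior):

```python
# One ConfigMap projected into two volumes of the same pod: every key becomes
# a file under each mount path, so the contents are identical at both paths.

configmap = {"data-1": "value-1"}  # hypothetical key/value
mounts = ["/etc/configmap-volume", "/etc/configmap-volume-2"]  # hypothetical paths

def project(cm: dict, mount_paths: list) -> dict:
    """Project every ConfigMap key as a file under each mount path."""
    fs = {}
    for path in mount_paths:
        for key, value in cm.items():
            fs[f"{path}/{key}"] = value
    return fs

fs = project(configmap, mounts)
assert fs["/etc/configmap-volume/data-1"] == fs["/etc/configmap-volume-2/data-1"]
```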
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:43:15.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 20 11:43:16.218: INFO: Waiting up to 5m0s for pod "pod-e92ffa08-231d-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-mxr6w" to be "success or failure"
Dec 20 11:43:16.237: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.752494ms
Dec 20 11:43:18.590: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371954201s
Dec 20 11:43:20.610: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391114609s
Dec 20 11:43:22.643: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424641614s
Dec 20 11:43:24.813: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.59493006s
Dec 20 11:43:26.980: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.761857132s
STEP: Saw pod success
Dec 20 11:43:26.980: INFO: Pod "pod-e92ffa08-231d-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:43:27.000: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e92ffa08-231d-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:43:27.228: INFO: Waiting for pod pod-e92ffa08-231d-11ea-851f-0242ac110004 to disappear
Dec 20 11:43:27.254: INFO: Pod pod-e92ffa08-231d-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:43:27.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mxr6w" for this suite.
Dec 20 11:43:33.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:43:33.559: INFO: namespace: e2e-tests-emptydir-mxr6w, resource: bindings, ignored listing per whitelist
Dec 20 11:43:33.577: INFO: namespace e2e-tests-emptydir-mxr6w deletion completed in 6.3044099s

• [SLOW TEST:17.640 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
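Annotator's note: the emptyDir spec above checks the permission mode of the mount. A local sketch of how such a mode check can be expressed; the 0o777 expectation here is an assumption set up locally for illustration, not read from the log:

```python
import os
import stat
import tempfile

# Sketch: verify a directory's permission bits, similar in spirit to the
# emptydir mode check above. The 0o777 mode is created locally on purpose.

path = tempfile.mkdtemp()
os.chmod(path, 0o777)

def mode_bits(p: str) -> int:
    """Return only the permission bits of a path's mode."""
    return stat.S_IMODE(os.stat(p).st_mode)

assert mode_bits(path) == 0o777
```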
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:43:33.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1220 11:43:43.994925       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 11:43:43.995: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:43:43.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-v7j8m" for this suite.
Dec 20 11:43:50.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:43:50.130: INFO: namespace: e2e-tests-gc-v7j8m, resource: bindings, ignored listing per whitelist
Dec 20 11:43:50.163: INFO: namespace e2e-tests-gc-v7j8m deletion completed in 6.162525346s

• [SLOW TEST:16.586 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
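Annotator's note: the garbage-collector spec above deletes an RC without orphaning and waits for its pods to be collected. A simplified model of that ownerReference rule (the real garbage collector walks a dependency graph in kube-controller-manager; this is an illustration only):

```python
# Non-orphaning deletion: when an owner is deleted without orphaning,
# dependents with no remaining live owner are garbage collected as well.

owners = {"rc-1": True}                          # hypothetical RC, alive
pods = {f"pod-{i}": ["rc-1"] for i in range(3)}  # pods owned only by rc-1

def delete_owner(name: str, orphan: bool = False):
    owners[name] = False
    if not orphan:
        # collect dependents whose owners are all gone
        for pod, refs in list(pods.items()):
            if not any(owners.get(r, False) for r in refs):
                del pods[pod]

delete_owner("rc-1")
assert pods == {}  # all dependents garbage collected
```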
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:43:50.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 20 11:43:50.449: INFO: Waiting up to 5m0s for pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-qb9dz" to be "success or failure"
Dec 20 11:43:50.465: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.810033ms
Dec 20 11:43:52.559: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109368154s
Dec 20 11:43:54.595: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145682693s
Dec 20 11:43:56.658: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208995215s
Dec 20 11:43:58.678: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228722997s
Dec 20 11:44:00.691: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.241638343s
STEP: Saw pod success
Dec 20 11:44:00.691: INFO: Pod "downward-api-fd9e2faf-231d-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:44:00.702: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fd9e2faf-231d-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 11:44:03.654: INFO: Waiting for pod downward-api-fd9e2faf-231d-11ea-851f-0242ac110004 to disappear
Dec 20 11:44:04.069: INFO: Pod downward-api-fd9e2faf-231d-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:44:04.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qb9dz" for this suite.
Dec 20 11:44:10.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:44:10.152: INFO: namespace: e2e-tests-downward-api-qb9dz, resource: bindings, ignored listing per whitelist
Dec 20 11:44:10.259: INFO: namespace e2e-tests-downward-api-qb9dz deletion completed in 6.166902412s

• [SLOW TEST:20.095 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
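Annotator's note: the Downward API spec above exposes the container's resource limits and requests as environment variables. A sketch of that field-to-env-var mapping; the variable names and resource values below are illustrative assumptions, not the test's actual manifest:

```python
# Downward-API style: container resource fields surfaced to the container
# as environment variables. Naming scheme here is a hypothetical convention.

resources = {
    "limits": {"cpu": "1250m", "memory": "64Mi"},
    "requests": {"cpu": "250m", "memory": "32Mi"},
}

def downward_env(res: dict) -> dict:
    """Build env vars from resource fields, one per (kind, resource) pair."""
    env = {}
    for kind, fields in res.items():
        for name, value in fields.items():
            env[f"{kind.upper()}_{name.upper()}"] = value
    return env

env = downward_env(resources)
assert env["LIMITS_CPU"] == "1250m"
assert env["REQUESTS_MEMORY"] == "32Mi"
```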
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:44:10.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 11:44:10.512: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:44:11.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-zftqq" for this suite.
Dec 20 11:44:17.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:44:17.936: INFO: namespace: e2e-tests-custom-resource-definition-zftqq, resource: bindings, ignored listing per whitelist
Dec 20 11:44:18.050: INFO: namespace e2e-tests-custom-resource-definition-zftqq deletion completed in 6.265652183s

• [SLOW TEST:7.791 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:44:18.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 20 11:44:44.378: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:44.378: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:45.351: INFO: Exec stderr: ""
Dec 20 11:44:45.351: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:45.352: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:45.671: INFO: Exec stderr: ""
Dec 20 11:44:45.672: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:45.672: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:46.004: INFO: Exec stderr: ""
Dec 20 11:44:46.004: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:46.004: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:46.283: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 20 11:44:46.283: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:46.283: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:46.699: INFO: Exec stderr: ""
Dec 20 11:44:46.699: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:46.699: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:46.992: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 20 11:44:46.993: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:46.993: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:47.352: INFO: Exec stderr: ""
Dec 20 11:44:47.352: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:47.352: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:47.657: INFO: Exec stderr: ""
Dec 20 11:44:47.657: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:47.658: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:48.018: INFO: Exec stderr: ""
Dec 20 11:44:48.018: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xwxmc PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 11:44:48.018: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 11:44:48.284: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:44:48.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-xwxmc" for this suite.
Dec 20 11:45:34.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:45:34.609: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-xwxmc, resource: bindings, ignored listing per whitelist
Dec 20 11:45:34.688: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-xwxmc deletion completed in 46.392683648s

• [SLOW TEST:76.635 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
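Annotator's note: the three verification phases above (hostNetwork=false containers, a container mounting its own /etc/hosts, and a hostNetwork=true pod) exercise one rule: the kubelet manages a container's /etc/hosts only when the pod is not on the host network and the container does not mount a file over /etc/hosts. A sketch of that predicate (simplified; the actual decision lives in the kubelet):

```python
# Whether the kubelet writes/manages /etc/hosts for a container.

def etc_hosts_is_kubelet_managed(host_network: bool, mounts_etc_hosts: bool) -> bool:
    """Kubelet manages /etc/hosts unless hostNetwork is used or the
    container mounts its own file over /etc/hosts."""
    return not host_network and not mounts_etc_hosts

# The three cases verified in the log:
assert etc_hosts_is_kubelet_managed(False, False) is True   # busybox-1/2
assert etc_hosts_is_kubelet_managed(False, True) is False   # busybox-3 mounts /etc/hosts
assert etc_hosts_is_kubelet_managed(True, False) is False   # hostNetwork=true pod
```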
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:45:34.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 11:46:09.015: INFO: Container started at 2019-12-20 11:45:43 +0000 UTC, pod became ready at 2019-12-20 11:46:07 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:46:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-k8zj6" for this suite.
Dec 20 11:46:33.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:46:33.202: INFO: namespace: e2e-tests-container-probe-k8zj6, resource: bindings, ignored listing per whitelist
Dec 20 11:46:33.371: INFO: namespace e2e-tests-container-probe-k8zj6 deletion completed in 24.349240746s

• [SLOW TEST:58.683 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
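Annotator's note: the readiness-probe spec above checks that a container does not report ready before its configured initial delay has elapsed since start. A sketch of that timing rule; the 30-second delay below is an assumed value for illustration, not read from the log:

```python
# Readiness probing only begins after initialDelaySeconds has elapsed
# since the container started. Times are plain seconds for illustration;
# the real kubelet evaluates probe results over time.

INITIAL_DELAY_SECONDS = 30  # assumption for illustration

def may_be_ready(started_at: float, now: float,
                 initial_delay: float = INITIAL_DELAY_SECONDS) -> bool:
    """A container may only be considered ready once the delay has passed."""
    return (now - started_at) >= initial_delay

assert may_be_ready(0, 24) is False  # too early for a 30s delay
assert may_be_ready(0, 30) is True   # delay elapsed
```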
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:46:33.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 20 11:46:33.643: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7ktqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7ktqh/configmaps/e2e-watch-test-watch-closed,UID:5ed4e322-231e-11ea-a994-fa163e34d433,ResourceVersion:15450916,Generation:0,CreationTimestamp:2019-12-20 11:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 11:46:33.644: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7ktqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7ktqh/configmaps/e2e-watch-test-watch-closed,UID:5ed4e322-231e-11ea-a994-fa163e34d433,ResourceVersion:15450917,Generation:0,CreationTimestamp:2019-12-20 11:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 20 11:46:33.700: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7ktqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7ktqh/configmaps/e2e-watch-test-watch-closed,UID:5ed4e322-231e-11ea-a994-fa163e34d433,ResourceVersion:15450918,Generation:0,CreationTimestamp:2019-12-20 11:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 11:46:33.700: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-7ktqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-7ktqh/configmaps/e2e-watch-test-watch-closed,UID:5ed4e322-231e-11ea-a994-fa163e34d433,ResourceVersion:15450919,Generation:0,CreationTimestamp:2019-12-20 11:46:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:46:33.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7ktqh" for this suite.
Dec 20 11:46:39.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:46:39.969: INFO: namespace: e2e-tests-watch-7ktqh, resource: bindings, ignored listing per whitelist
Dec 20 11:46:40.058: INFO: namespace e2e-tests-watch-7ktqh deletion completed in 6.279327777s

• [SLOW TEST:6.687 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
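Annotator's note: the watch spec above closes a watch after two notifications, then opens a new one from the last observed resourceVersion and expects only the later MODIFIED and DELETED events. A sketch of that resume rule, using the resourceVersions from this run (events modeled as simple pairs; this is an illustration, not the API server's watch cache):

```python
# Resuming a watch from the last observed resourceVersion: events at or
# below that version are not redelivered. Versions below are from the log.

events = [
    (15450916, "ADDED"),
    (15450917, "MODIFIED"),  # first watch closed after this one
    (15450918, "MODIFIED"),  # happened while the watch was closed
    (15450919, "DELETED"),
]

def resume_watch(all_events, last_seen_rv: int):
    """Return only events newer than the resourceVersion the client last saw."""
    return [e for e in all_events if e[0] > last_seen_rv]

assert resume_watch(events, 15450917) == [(15450918, "MODIFIED"),
                                          (15450919, "DELETED")]
```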
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:46:40.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1220 11:46:55.276024       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 11:46:55.276: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:46:55.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9v4cg" for this suite.
Dec 20 11:47:19.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:47:19.659: INFO: namespace: e2e-tests-gc-9v4cg, resource: bindings, ignored listing per whitelist
Dec 20 11:47:19.669: INFO: namespace e2e-tests-gc-9v4cg deletion completed in 24.388250642s

• [SLOW TEST:39.610 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
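The garbage-collector spec above gives half the pods two ownerReferences, then deletes only one owner. A toy model of the collection rule being verified (hypothetical function, assuming a dependent is kept while any owner still exists, including one blocked in foreground deletion):

```python
def should_delete(owner_refs, live_owners):
    """A dependent is collected only when none of its owners still exist.

    live_owners: names of owners that are still present, including owners
    that are merely *waiting* for their dependents (foreground deletion) --
    those still count as present, so the dependent survives.
    """
    return not any(ref in live_owners for ref in owner_refs)

# As in the test: rc-to-be-deleted is gone, rc-to-stay remains.
live = {"simpletest-rc-to-stay"}
pod_with_both_owners = ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"]
pod_with_one_owner   = ["simpletest-rc-to-be-deleted"]
```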
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:47:19.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 20 11:47:40.327: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:40.403: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 11:47:42.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:42.420: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 11:47:44.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:44.415: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 11:47:46.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:46.669: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 11:47:48.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:48.425: INFO: Pod pod-with-prestop-http-hook still exists
Dec 20 11:47:50.404: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 20 11:47:50.422: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:47:50.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vmk8z" for this suite.
Dec 20 11:48:14.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:48:14.886: INFO: namespace: e2e-tests-container-lifecycle-hook-vmk8z, resource: bindings, ignored listing per whitelist
Dec 20 11:48:14.912: INFO: namespace e2e-tests-container-lifecycle-hook-vmk8z deletion completed in 24.416112495s

• [SLOW TEST:55.243 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
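The "Waiting for pod ... to disappear / still exists" lines above come from a 2-second poll loop. A self-contained sketch of that pattern (hypothetical helper; the injectable `clock`/`sleep` parameters are an assumption added so the loop can be exercised without real waiting):

```python
import time

def wait_for_disappear(exists, timeout=60, interval=2,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll exists() every `interval` seconds until it returns False
    or `timeout` seconds elapse. Returns True if the object disappeared."""
    deadline = clock() + timeout
    while clock() < deadline:
        if not exists():
            return True
        sleep(interval)
    return False

# Fake pod lookup: "still exists" three times, then gone -- mirroring the
# alternating log lines in the test above.
calls = {"n": 0}
def fake_pod_exists():
    calls["n"] += 1
    return calls["n"] < 4

gone = wait_for_disappear(fake_pod_exists, timeout=60, interval=2,
                          sleep=lambda s: None)
```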
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:48:14.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 20 11:48:15.054: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:48:15.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d4zqq" for this suite.
Dec 20 11:48:23.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:48:23.450: INFO: namespace: e2e-tests-kubectl-d4zqq, resource: bindings, ignored listing per whitelist
Dec 20 11:48:23.566: INFO: namespace e2e-tests-kubectl-d4zqq deletion completed in 8.350411014s

• [SLOW TEST:8.653 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
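With `--port 0` (`-p 0`), kubectl proxy binds a random free port and announces it on stdout, which is what the test then curls. A sketch of extracting that port from the startup line (the exact message format is an assumption based on observed kubectl output):

```python
import re

def parse_proxy_port(line):
    """Pull the randomly assigned port out of kubectl proxy's startup line,
    e.g. 'Starting to serve on 127.0.0.1:38383'. Returns None if absent."""
    m = re.search(r"Starting to serve on .*:(\d+)", line)
    return int(m.group(1)) if m else None

port = parse_proxy_port("Starting to serve on 127.0.0.1:38383")
```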
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:48:23.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-a09a7c5f-231e-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:48:38.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vcx95" for this suite.
Dec 20 11:49:02.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:49:02.315: INFO: namespace: e2e-tests-configmap-vcx95, resource: bindings, ignored listing per whitelist
Dec 20 11:49:02.350: INFO: namespace e2e-tests-configmap-vcx95 deletion completed in 24.226646291s

• [SLOW TEST:38.784 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
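The ConfigMap spec above checks that binary payloads round-trip through a volume. In the API object, `binaryData` values are carried base64-encoded; a minimal sketch of that encoding step:

```python
import base64

def to_binary_data(payload: bytes) -> str:
    """ConfigMap `binaryData` values are base64-encoded strings in the
    serialized API object; the kubelet decodes them back when writing
    the volume file."""
    return base64.b64encode(payload).decode("ascii")

raw = b"\xff\xfe\x00binary"          # bytes that are not valid UTF-8 text
encoded = to_binary_data(raw)
decoded = base64.b64decode(encoded)  # what ends up on disk in the pod
```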
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:49:02.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:49:09.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-f9jx5" for this suite.
Dec 20 11:49:17.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:49:18.109: INFO: namespace: e2e-tests-namespaces-f9jx5, resource: bindings, ignored listing per whitelist
Dec 20 11:49:18.137: INFO: namespace e2e-tests-namespaces-f9jx5 deletion completed in 8.908504058s
STEP: Destroying namespace "e2e-tests-nsdeletetest-2bvwf" for this suite.
Dec 20 11:49:18.142: INFO: Namespace e2e-tests-nsdeletetest-2bvwf was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-vbfbg" for this suite.
Dec 20 11:49:24.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:49:24.314: INFO: namespace: e2e-tests-nsdeletetest-vbfbg, resource: bindings, ignored listing per whitelist
Dec 20 11:49:24.334: INFO: namespace e2e-tests-nsdeletetest-vbfbg deletion completed in 6.192579995s

• [SLOW TEST:21.983 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
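The namespace test above deletes a namespace containing a service, recreates it, and verifies the namespace comes back empty. A toy model of that scoping invariant (hypothetical class, not the controller's actual implementation):

```python
class Cluster:
    """Toy model: deleting a namespace removes every object scoped to it,
    and a recreated namespace starts with no objects."""

    def __init__(self):
        self.services = {}  # namespace -> set of service names

    def create_service(self, ns, name):
        self.services.setdefault(ns, set()).add(name)

    def delete_namespace(self, ns):
        # Namespace deletion cascades to all namespaced objects.
        self.services.pop(ns, None)

    def list_services(self, ns):
        return sorted(self.services.get(ns, set()))

c = Cluster()
c.create_service("nsdeletetest", "test-service")
c.delete_namespace("nsdeletetest")
# Recreating the namespace is implicit: listing it again finds nothing.
```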
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:49:24.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 20 11:49:35.582: INFO: Successfully updated pod "annotationupdatec4e3dac2-231e-11ea-851f-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:49:37.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-622d5" for this suite.
Dec 20 11:50:01.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:50:02.278: INFO: namespace: e2e-tests-projected-622d5, resource: bindings, ignored listing per whitelist
Dec 20 11:50:02.366: INFO: namespace e2e-tests-projected-622d5 deletion completed in 24.583509159s

• [SLOW TEST:38.030 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
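The projected downward API test above updates pod annotations and expects the mounted file to refresh. A sketch of the file format the pod reads back, one `key="value"` pair per line (the exact escaping and format are assumptions based on observed downward-API volume files, not taken from this log):

```python
def downward_api_render(annotations):
    """Render annotations roughly the way a downward-API volume file looks:
    one key="value" pair per line, sorted by key. Format assumed, not
    authoritative -- the kubelet's escaping rules are more involved."""
    lines = ['%s="%s"' % (k, v.replace('"', '\\"'))
             for k, v in sorted(annotations.items())]
    return "\n".join(lines) + "\n"

content = downward_api_render({"builder": "john-doe"})
```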
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:50:02.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 20 11:50:02.821: INFO: Waiting up to 5m0s for pod "pod-db7f16ff-231e-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-gvp5q" to be "success or failure"
Dec 20 11:50:02.884: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 63.213655ms
Dec 20 11:50:04.923: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102004102s
Dec 20 11:50:07.104: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283463585s
Dec 20 11:50:09.116: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295088621s
Dec 20 11:50:11.228: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.407455932s
Dec 20 11:50:13.446: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624798236s
STEP: Saw pod success
Dec 20 11:50:13.446: INFO: Pod "pod-db7f16ff-231e-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:50:13.453: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db7f16ff-231e-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:50:13.661: INFO: Waiting for pod pod-db7f16ff-231e-11ea-851f-0242ac110004 to disappear
Dec 20 11:50:13.672: INFO: Pod pod-db7f16ff-231e-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:50:13.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gvp5q" for this suite.
Dec 20 11:50:19.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:50:20.057: INFO: namespace: e2e-tests-emptydir-gvp5q, resource: bindings, ignored listing per whitelist
Dec 20 11:50:20.076: INFO: namespace e2e-tests-emptydir-gvp5q deletion completed in 6.396051559s

• [SLOW TEST:17.710 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
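The emptyDir spec above mounts a volume with mode 0777 on the default medium and has the pod verify the permission bits. The mode arithmetic the in-pod check relies on can be shown locally (a sketch on a temp directory, standing in for the volume mount):

```python
import os
import stat
import tempfile

# Stand-in for the emptyDir mount point: create a directory, set 0777,
# and read the permission bits back the way the test container would.
with tempfile.TemporaryDirectory() as d:
    mount = os.path.join(d, "mnt")
    os.mkdir(mount)
    os.chmod(mount, 0o777)
    mode = stat.S_IMODE(os.stat(mount).st_mode)
```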
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:50:20.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1220 11:51:01.669477       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 11:51:01.669: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:51:01.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-78q4x" for this suite.
Dec 20 11:51:11.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:51:12.816: INFO: namespace: e2e-tests-gc-78q4x, resource: bindings, ignored listing per whitelist
Dec 20 11:51:12.861: INFO: namespace e2e-tests-gc-78q4x deletion completed in 11.185861432s

• [SLOW TEST:52.785 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
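"Delete options say so" in the spec above refers to a DeleteOptions with orphaning propagation: the rc is removed but its pods are left behind with their ownerReferences cleared. A sketch of such an options object (`propagationPolicy` is the v1 DeleteOptions field; the helper itself is hypothetical):

```python
def delete_options(policy="Orphan"):
    """Build a DeleteOptions body that controls dependent propagation.
    Valid policies include "Orphan", "Background", and "Foreground";
    "Orphan" leaves dependents alive, as the test above verifies."""
    return {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": policy,
    }

opts = delete_options("Orphan")
```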
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:51:12.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 20 11:51:13.355: INFO: Waiting up to 5m0s for pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004" in namespace "e2e-tests-containers-4pkg5" to be "success or failure"
Dec 20 11:51:13.393: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.90409ms
Dec 20 11:51:15.554: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198739731s
Dec 20 11:51:17.580: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224086521s
Dec 20 11:51:19.594: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238667454s
Dec 20 11:51:21.605: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249234602s
Dec 20 11:51:23.624: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.268935117s
Dec 20 11:51:25.661: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.30576859s
Dec 20 11:51:28.100: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.744345281s
Dec 20 11:51:30.114: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.758478185s
Dec 20 11:51:32.135: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.779107387s
Dec 20 11:51:34.932: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.576268939s
Dec 20 11:51:36.951: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.595607969s
STEP: Saw pod success
Dec 20 11:51:36.951: INFO: Pod "client-containers-05849c4e-231f-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:51:36.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-05849c4e-231f-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:51:37.624: INFO: Waiting for pod client-containers-05849c4e-231f-11ea-851f-0242ac110004 to disappear
Dec 20 11:51:38.105: INFO: Pod client-containers-05849c4e-231f-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:51:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4pkg5" for this suite.
Dec 20 11:51:44.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:51:44.495: INFO: namespace: e2e-tests-containers-4pkg5, resource: bindings, ignored listing per whitelist
Dec 20 11:51:44.548: INFO: namespace e2e-tests-containers-4pkg5 deletion completed in 6.431045912s

• [SLOW TEST:31.686 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
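The Docker Containers spec above overrides the image's default command. The interaction being tested is the documented one between the pod fields `command`/`args` and the image's ENTRYPOINT/CMD; a compact model of those four cases (hypothetical helper, lists of argv strings):

```python
def effective_command(entrypoint, cmd, command=None, args=None):
    """Kubernetes semantics: `command` replaces the image ENTRYPOINT and
    `args` replaces the image CMD; when `command` is set without `args`,
    the image CMD is ignored entirely."""
    if command is None and args is None:
        return entrypoint + cmd          # image defaults
    if command is not None and args is None:
        return command                   # ENTRYPOINT replaced, CMD dropped
    if command is None:
        return entrypoint + args         # image ENTRYPOINT + overridden args
    return command + args                # both overridden

# The test's case: command set, so the image entrypoint is replaced.
result = effective_command(["/image-entrypoint"], ["default-arg"],
                           command=["/bin/echo", "override"])
```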
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:51:44.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-w2dkz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w2dkz to expose endpoints map[]
Dec 20 11:51:45.053: INFO: Get endpoints failed (19.729041ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 20 11:51:46.064: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w2dkz exposes endpoints map[] (1.03106649s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-w2dkz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w2dkz to expose endpoints map[pod1:[80]]
Dec 20 11:51:50.243: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.150794214s elapsed, will retry)
Dec 20 11:51:54.454: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w2dkz exposes endpoints map[pod1:[80]] (8.361009644s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-w2dkz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w2dkz to expose endpoints map[pod1:[80] pod2:[80]]
Dec 20 11:51:59.278: INFO: Unexpected endpoints: found map[19218811-231f-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.799572297s elapsed, will retry)
Dec 20 11:52:04.863: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w2dkz exposes endpoints map[pod1:[80] pod2:[80]] (10.385237095s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-w2dkz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w2dkz to expose endpoints map[pod2:[80]]
Dec 20 11:52:05.101: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w2dkz exposes endpoints map[pod2:[80]] (180.007971ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-w2dkz
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-w2dkz to expose endpoints map[]
Dec 20 11:52:06.413: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-w2dkz exposes endpoints map[] (1.265816943s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:52:08.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-w2dkz" for this suite.
Dec 20 11:52:30.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:52:30.229: INFO: namespace: e2e-tests-services-w2dkz, resource: bindings, ignored listing per whitelist
Dec 20 11:52:30.289: INFO: namespace e2e-tests-services-w2dkz deletion completed in 22.237871765s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.739 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
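The Services spec above walks the endpoints map through `map[]` → `map[pod1:[80]]` → `map[pod1:[80] pod2:[80]]` → `map[pod2:[80]]` → `map[]` as pods are created and deleted. A toy model of that expected-state transition (one endpoint entry per ready backing pod; hypothetical helper):

```python
def expected_endpoints(pods):
    """The endpoints controller publishes one name -> ports entry per ready
    pod backing the service selector -- the map the test polls against."""
    return {name: sorted(ports) for name, ports in pods.items() if ports}

pods = {}
assert expected_endpoints(pods) == {}            # service with no pods yet

pods["pod1"] = [80]
pods["pod2"] = [80]
after_both = expected_endpoints(pods)            # both pods serving

del pods["pod1"]
after_delete = expected_endpoints(pods)          # only pod2 remains
```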
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:52:30.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-33a0fa0b-231f-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 11:52:30.668: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-ms2vz" to be "success or failure"
Dec 20 11:52:30.681: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.090659ms
Dec 20 11:52:32.720: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052082214s
Dec 20 11:52:34.770: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10116299s
Dec 20 11:52:37.289: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620672102s
Dec 20 11:52:39.307: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639071632s
Dec 20 11:52:41.330: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.661267478s
STEP: Saw pod success
Dec 20 11:52:41.330: INFO: Pod "pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:52:41.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 11:52:41.489: INFO: Waiting for pod pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004 to disappear
Dec 20 11:52:41.527: INFO: Pod pod-projected-configmaps-33aded48-231f-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:52:41.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ms2vz" for this suite.
Dec 20 11:52:47.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:52:47.847: INFO: namespace: e2e-tests-projected-ms2vz, resource: bindings, ignored listing per whitelist
Dec 20 11:52:47.857: INFO: namespace e2e-tests-projected-ms2vz deletion completed in 6.32116087s

• [SLOW TEST:17.568 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:52:47.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 11:52:48.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-bwgnr" to be "success or failure"
Dec 20 11:52:48.181: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.690434ms
Dec 20 11:52:50.192: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027285431s
Dec 20 11:52:52.214: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049835503s
Dec 20 11:52:54.227: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062769255s
Dec 20 11:52:56.455: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290323154s
Dec 20 11:52:58.594: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.429595667s
STEP: Saw pod success
Dec 20 11:52:58.594: INFO: Pod "downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:52:58.659: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 11:52:58.875: INFO: Waiting for pod downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004 to disappear
Dec 20 11:52:58.889: INFO: Pod downwardapi-volume-3e208134-231f-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:52:58.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bwgnr" for this suite.
Dec 20 11:53:04.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:53:04.961: INFO: namespace: e2e-tests-downward-api-bwgnr, resource: bindings, ignored listing per whitelist
Dec 20 11:53:05.103: INFO: namespace e2e-tests-downward-api-bwgnr deletion completed in 6.199121588s

• [SLOW TEST:17.243 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:53:05.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 20 11:53:05.937: INFO: Waiting up to 5m0s for pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6" in namespace "e2e-tests-svcaccounts-p9htm" to be "success or failure"
Dec 20 11:53:06.020: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 82.642969ms
Dec 20 11:53:08.034: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096651718s
Dec 20 11:53:10.048: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110780483s
Dec 20 11:53:12.062: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124662797s
Dec 20 11:53:14.170: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232833741s
Dec 20 11:53:16.679: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.742300407s
Dec 20 11:53:18.709: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.772373075s
Dec 20 11:53:20.731: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.793818432s
STEP: Saw pod success
Dec 20 11:53:20.731: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6" satisfied condition "success or failure"
Dec 20 11:53:20.739: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6 container token-test: 
STEP: delete the pod
Dec 20 11:53:20.834: INFO: Waiting for pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6 to disappear
Dec 20 11:53:20.853: INFO: Pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ctjb6 no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 20 11:53:20.883: INFO: Waiting up to 5m0s for pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2" in namespace "e2e-tests-svcaccounts-p9htm" to be "success or failure"
Dec 20 11:53:20.948: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 65.167166ms
Dec 20 11:53:22.964: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081182842s
Dec 20 11:53:24.980: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09682903s
Dec 20 11:53:27.216: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332985653s
Dec 20 11:53:30.162: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.279256154s
Dec 20 11:53:32.208: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.324572877s
Dec 20 11:53:34.261: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.378035219s
Dec 20 11:53:36.295: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.412013957s
Dec 20 11:53:38.308: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.425264712s
Dec 20 11:53:40.348: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.464821743s
STEP: Saw pod success
Dec 20 11:53:40.348: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2" satisfied condition "success or failure"
Dec 20 11:53:40.355: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2 container root-ca-test: 
STEP: delete the pod
Dec 20 11:53:40.640: INFO: Waiting for pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2 to disappear
Dec 20 11:53:40.886: INFO: Pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-4lqx2 no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 20 11:53:40.958: INFO: Waiting up to 5m0s for pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq" in namespace "e2e-tests-svcaccounts-p9htm" to be "success or failure"
Dec 20 11:53:41.043: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 84.748178ms
Dec 20 11:53:43.399: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440127042s
Dec 20 11:53:45.412: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45350666s
Dec 20 11:53:47.521: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562301193s
Dec 20 11:53:50.041: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.082362811s
Dec 20 11:53:52.101: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.142809115s
Dec 20 11:53:54.119: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.160652692s
Dec 20 11:53:56.140: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.18140403s
Dec 20 11:53:58.152: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.193795307s
STEP: Saw pod success
Dec 20 11:53:58.152: INFO: Pod "pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq" satisfied condition "success or failure"
Dec 20 11:53:58.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq container namespace-test: 
STEP: delete the pod
Dec 20 11:53:58.755: INFO: Waiting for pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq to disappear
Dec 20 11:53:58.772: INFO: Pod pod-service-account-48b4c096-231f-11ea-851f-0242ac110004-ql8cq no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:53:58.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-p9htm" for this suite.
Dec 20 11:54:07.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:54:07.088: INFO: namespace: e2e-tests-svcaccounts-p9htm, resource: bindings, ignored listing per whitelist
Dec 20 11:54:07.525: INFO: namespace e2e-tests-svcaccounts-p9htm deletion completed in 8.567072471s

• [SLOW TEST:62.422 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:54:07.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:54:07.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-fxhpv" for this suite.
Dec 20 11:54:13.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:54:14.132: INFO: namespace: e2e-tests-services-fxhpv, resource: bindings, ignored listing per whitelist
Dec 20 11:54:14.147: INFO: namespace e2e-tests-services-fxhpv deletion completed in 6.230792261s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.620 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:54:14.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 20 11:54:14.393: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jxchq,SelfLink:/api/v1/namespaces/e2e-tests-watch-jxchq/configmaps/e2e-watch-test-resource-version,UID:717d7f09-231f-11ea-a994-fa163e34d433,ResourceVersion:15452143,Generation:0,CreationTimestamp:2019-12-20 11:54:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 11:54:14.393: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-jxchq,SelfLink:/api/v1/namespaces/e2e-tests-watch-jxchq/configmaps/e2e-watch-test-resource-version,UID:717d7f09-231f-11ea-a994-fa163e34d433,ResourceVersion:15452144,Generation:0,CreationTimestamp:2019-12-20 11:54:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:54:14.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-jxchq" for this suite.
Dec 20 11:54:20.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:54:20.616: INFO: namespace: e2e-tests-watch-jxchq, resource: bindings, ignored listing per whitelist
Dec 20 11:54:20.740: INFO: namespace e2e-tests-watch-jxchq deletion completed in 6.28058425s

• [SLOW TEST:6.593 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:54:20.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 20 11:54:20.930: INFO: Waiting up to 5m0s for pod "pod-756ad023-231f-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-r229c" to be "success or failure"
Dec 20 11:54:21.061: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 130.349107ms
Dec 20 11:54:23.089: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158629266s
Dec 20 11:54:25.112: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18219886s
Dec 20 11:54:27.397: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467204269s
Dec 20 11:54:29.411: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.480951675s
Dec 20 11:54:31.426: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495328128s
Dec 20 11:54:33.648: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.717411107s
STEP: Saw pod success
Dec 20 11:54:33.648: INFO: Pod "pod-756ad023-231f-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 11:54:33.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-756ad023-231f-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 11:54:34.287: INFO: Waiting for pod pod-756ad023-231f-11ea-851f-0242ac110004 to disappear
Dec 20 11:54:34.332: INFO: Pod pod-756ad023-231f-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:54:34.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r229c" for this suite.
Dec 20 11:54:40.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:54:40.526: INFO: namespace: e2e-tests-emptydir-r229c, resource: bindings, ignored listing per whitelist
Dec 20 11:54:40.597: INFO: namespace e2e-tests-emptydir-r229c deletion completed in 6.254607645s

• [SLOW TEST:19.856 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:54:40.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 20 11:54:53.363: INFO: Successfully updated pod "pod-update-activedeadlineseconds-813f96ff-231f-11ea-851f-0242ac110004"
Dec 20 11:54:53.363: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-813f96ff-231f-11ea-851f-0242ac110004" in namespace "e2e-tests-pods-8tl7p" to be "terminated due to deadline exceeded"
Dec 20 11:54:53.389: INFO: Pod "pod-update-activedeadlineseconds-813f96ff-231f-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 25.72506ms
Dec 20 11:54:55.421: INFO: Pod "pod-update-activedeadlineseconds-813f96ff-231f-11ea-851f-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.057759874s
Dec 20 11:54:55.421: INFO: Pod "pod-update-activedeadlineseconds-813f96ff-231f-11ea-851f-0242ac110004" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:54:55.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8tl7p" for this suite.
Dec 20 11:55:01.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:55:01.527: INFO: namespace: e2e-tests-pods-8tl7p, resource: bindings, ignored listing per whitelist
Dec 20 11:55:01.815: INFO: namespace e2e-tests-pods-8tl7p deletion completed in 6.384320243s

• [SLOW TEST:21.218 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:55:01.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 11:55:02.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-g68sh'
Dec 20 11:55:04.123: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 11:55:04.124: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 20 11:55:08.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-g68sh'
Dec 20 11:55:08.505: INFO: stderr: ""
Dec 20 11:55:08.505: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:55:08.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g68sh" for this suite.
Dec 20 11:55:32.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:55:32.977: INFO: namespace: e2e-tests-kubectl-g68sh, resource: bindings, ignored listing per whitelist
Dec 20 11:55:32.977: INFO: namespace e2e-tests-kubectl-g68sh deletion completed in 24.328078806s

• [SLOW TEST:31.161 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:55:32.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xht89
Dec 20 11:55:45.272: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xht89
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 11:55:45.289: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 11:59:46.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xht89" for this suite.
Dec 20 11:59:52.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 11:59:53.039: INFO: namespace: e2e-tests-container-probe-xht89, resource: bindings, ignored listing per whitelist
Dec 20 11:59:53.107: INFO: namespace e2e-tests-container-probe-xht89 deletion completed in 6.230733233s

• [SLOW TEST:260.130 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 11:59:53.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w5zz2
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-w5zz2
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-w5zz2
Dec 20 11:59:53.439: INFO: Found 0 stateful pods, waiting for 1
Dec 20 12:00:03.456: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 20 12:00:03.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 12:00:04.418: INFO: stderr: ""
Dec 20 12:00:04.418: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 12:00:04.418: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 12:00:04.444: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 20 12:00:14.461: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 12:00:14.461: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 12:00:14.533: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:14.533: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:14.533: INFO: 
Dec 20 12:00:14.533: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 20 12:00:16.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974365981s
Dec 20 12:00:17.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.392952999s
Dec 20 12:00:18.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.260789869s
Dec 20 12:00:19.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.124604095s
Dec 20 12:00:20.464: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.113128098s
Dec 20 12:00:21.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.043906171s
Dec 20 12:00:23.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.774309593s
Dec 20 12:00:24.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 137.915135ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-w5zz2
Dec 20 12:00:25.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:00:26.627: INFO: stderr: ""
Dec 20 12:00:26.627: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 12:00:26.627: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 12:00:26.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:00:27.035: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 20 12:00:27.036: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 12:00:27.036: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 12:00:27.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:00:27.517: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 20 12:00:27.517: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 20 12:00:27.517: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 20 12:00:27.533: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:00:27.533: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:00:27.533: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 20 12:00:37.545: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:00:37.545: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:00:37.545: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 20 12:00:37.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 12:00:38.223: INFO: stderr: ""
Dec 20 12:00:38.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 12:00:38.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 12:00:38.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 12:00:38.809: INFO: stderr: ""
Dec 20 12:00:38.810: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 12:00:38.810: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 12:00:38.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 20 12:00:39.406: INFO: stderr: ""
Dec 20 12:00:39.406: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 20 12:00:39.406: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 20 12:00:39.406: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 12:00:39.415: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 20 12:00:49.447: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 12:00:49.447: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 12:00:49.447: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 20 12:00:49.576: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:49.576: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:49.576: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:49.576: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:49.576: INFO: 
Dec 20 12:00:49.577: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:52.428: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:52.428: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:52.429: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:52.429: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:52.429: INFO: 
Dec 20 12:00:52.429: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:53.494: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:53.494: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:53.494: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:53.494: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:53.494: INFO: 
Dec 20 12:00:53.494: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:54.519: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:54.519: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:54.520: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:54.520: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:54.520: INFO: 
Dec 20 12:00:54.520: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:55.846: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:55.846: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:55.846: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:55.846: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:55.846: INFO: 
Dec 20 12:00:55.846: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:56.894: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:56.894: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:56.894: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:56.894: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:56.894: INFO: 
Dec 20 12:00:56.894: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 20 12:00:58.792: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 20 12:00:58.792: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 11:59:53 +0000 UTC  }]
Dec 20 12:00:58.792: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:58.792: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:00:14 +0000 UTC  }]
Dec 20 12:00:58.792: INFO: 
Dec 20 12:00:58.792: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-w5zz2
Dec 20 12:00:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:00.185: INFO: rc: 1
Dec 20 12:01:00.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f66570 exit status 1   true [0xc000ada460 0xc000ada478 0xc000ada490] [0xc000ada460 0xc000ada478 0xc000ada490] [0xc000ada470 0xc000ada488] [0x935700 0x935700] 0xc0013ef0e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 20 12:01:10.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:10.453: INFO: rc: 1
Dec 20 12:01:10.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001719080 exit status 1   true [0xc001da08f8 0xc001da0910 0xc001da0928] [0xc001da08f8 0xc001da0910 0xc001da0928] [0xc001da0908 0xc001da0920] [0x935700 0x935700] 0xc000f1e540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:01:20.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:20.623: INFO: rc: 1
Dec 20 12:01:20.624: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0001e5050 exit status 1   true [0xc001656390 0xc0016563a8 0xc0016563c0] [0xc001656390 0xc0016563a8 0xc0016563c0] [0xc0016563a0 0xc0016563b8] [0x935700 0x935700] 0xc001344600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:01:30.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:30.763: INFO: rc: 1
Dec 20 12:01:30.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000a1f8c0 exit status 1   true [0xc001498660 0xc001498678 0xc001498690] [0xc001498660 0xc001498678 0xc001498690] [0xc001498670 0xc001498688] [0x935700 0x935700] 0xc000c54480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:01:40.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:40.896: INFO: rc: 1
Dec 20 12:01:40.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d48b70 exit status 1   true [0xc000415ce8 0xc000415d40 0xc000415db0] [0xc000415ce8 0xc000415d40 0xc000415db0] [0xc000415d10 0xc000415d70] [0x935700 0x935700] 0xc002533020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:01:50.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:01:51.002: INFO: rc: 1
Dec 20 12:01:51.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000313110 exit status 1   true [0xc00000e2a8 0xc001498010 0xc001498028] [0xc00000e2a8 0xc001498010 0xc001498028] [0xc001498008 0xc001498020] [0x935700 0x935700] 0xc00130b5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:01.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:01.350: INFO: rc: 1
Dec 20 12:02:01.351: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7680 exit status 1   true [0xc001656000 0xc001656018 0xc001656030] [0xc001656000 0xc001656018 0xc001656030] [0xc001656010 0xc001656028] [0x935700 0x935700] 0xc0000b8000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:11.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:11.528: INFO: rc: 1
Dec 20 12:02:11.528: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e77a0 exit status 1   true [0xc001656038 0xc001656050 0xc001656068] [0xc001656038 0xc001656050 0xc001656068] [0xc001656048 0xc001656060] [0x935700 0x935700] 0xc0017d94a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:21.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:21.712: INFO: rc: 1
Dec 20 12:02:21.712: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e78c0 exit status 1   true [0xc001656070 0xc001656088 0xc0016560a0] [0xc001656070 0xc001656088 0xc0016560a0] [0xc001656080 0xc001656098] [0x935700 0x935700] 0xc0015e6540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:31.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:31.881: INFO: rc: 1
Dec 20 12:02:31.882: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7a10 exit status 1   true [0xc0016560a8 0xc0016560c0 0xc0016560d8] [0xc0016560a8 0xc0016560c0 0xc0016560d8] [0xc0016560b8 0xc0016560d0] [0x935700 0x935700] 0xc0015e6960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:41.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:42.040: INFO: rc: 1
Dec 20 12:02:42.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001144120 exit status 1   true [0xc000bac000 0xc000bac030 0xc000bac048] [0xc000bac000 0xc000bac030 0xc000bac048] [0xc000bac028 0xc000bac040] [0x935700 0x935700] 0xc000d686c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:02:52.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:02:52.227: INFO: rc: 1
Dec 20 12:02:52.227: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7b90 exit status 1   true [0xc0016560e0 0xc0016560f8 0xc001656110] [0xc0016560e0 0xc0016560f8 0xc001656110] [0xc0016560f0 0xc001656108] [0x935700 0x935700] 0xc0015e6c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 20 12:03:02.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:03:02.553: INFO: rc: 1
Dec 20 12:03:02.553: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0004e7ce0 exit status 1   true [0xc001656118 0xc001656130 0xc001656148] [0xc001656118 0xc001656130 0xc001656148] [0xc001656128 0xc001656140] [0x935700 0x935700] 0xc0015e6f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

(... the same RunHostCmd retry, Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true', repeated every 10s from 12:03:12 through 12:05:55, each attempt failing identically with rc: 1 / Error from server (NotFound): pods "ss-0" not found / exit status 1; duplicate output elided ...)
Dec 20 12:06:05.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w5zz2 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 20 12:06:05.725: INFO: rc: 1
Dec 20 12:06:05.726: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 20 12:06:05.726: INFO: Scaling statefulset ss to 0
Dec 20 12:06:05.749: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 20 12:06:05.752: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w5zz2
Dec 20 12:06:05.755: INFO: Scaling statefulset ss to 0
Dec 20 12:06:05.771: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 12:06:05.775: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:06:05.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w5zz2" for this suite.
Dec 20 12:06:13.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:06:13.940: INFO: namespace: e2e-tests-statefulset-w5zz2, resource: bindings, ignored listing per whitelist
Dec 20 12:06:14.156: INFO: namespace e2e-tests-statefulset-w5zz2 deletion completed in 8.319286487s

• [SLOW TEST:381.048 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
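The StatefulSet test above retries the same `kubectl exec` every 10 s until the loop gives up. A minimal, generic sketch of that retry-on-failure pattern (the command, attempt count, and interval here are illustrative, not the e2e framework's actual implementation):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=3, interval=1.0):
    """Run a shell command, retrying on non-zero exit status.

    Mirrors, in spirit, the log's "Waiting 10s to retry failed
    RunHostCmd" loop: run, check rc, sleep, try again.
    """
    for i in range(attempts):
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        if i < attempts - 1:
            time.sleep(interval)  # the framework waits 10s between tries
    return result

# A command that always fails, like the NotFound errors above: every
# attempt returns rc 1, and the last result is handed back to the caller.
res = run_with_retry("exit 1", attempts=2, interval=0.05)
```

Because the pod `ss-0` never reappears, every attempt in the log fails the same way, and the loop only ends when the surrounding wait times out.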
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:06:14.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1ea8c928-2321-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1ea8c928-2321-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:06:26.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2bw5j" for this suite.
Dec 20 12:06:50.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:06:50.876: INFO: namespace: e2e-tests-projected-2bw5j, resource: bindings, ignored listing per whitelist
Dec 20 12:06:50.884: INFO: namespace e2e-tests-projected-2bw5j deletion completed in 24.331721522s

• [SLOW TEST:36.727 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:06:50.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 20 12:06:51.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:06:53.290: INFO: stderr: ""
Dec 20 12:06:53.291: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 12:06:53.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:06:53.680: INFO: stderr: ""
Dec 20 12:06:53.680: INFO: stdout: "update-demo-nautilus-ntlcr "
STEP: Replicas for name=update-demo: expected=2 actual=1
Dec 20 12:06:58.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:06:58.919: INFO: stderr: ""
Dec 20 12:06:58.919: INFO: stdout: "update-demo-nautilus-4vndh update-demo-nautilus-ntlcr "
Dec 20 12:06:58.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vndh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:06:59.152: INFO: stderr: ""
Dec 20 12:06:59.152: INFO: stdout: ""
Dec 20 12:06:59.152: INFO: update-demo-nautilus-4vndh is created but not running
Dec 20 12:07:04.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:04.307: INFO: stderr: ""
Dec 20 12:07:04.307: INFO: stdout: "update-demo-nautilus-4vndh update-demo-nautilus-ntlcr "
Dec 20 12:07:04.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vndh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:04.415: INFO: stderr: ""
Dec 20 12:07:04.415: INFO: stdout: "true"
Dec 20 12:07:04.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vndh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:04.636: INFO: stderr: ""
Dec 20 12:07:04.636: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 12:07:04.636: INFO: validating pod update-demo-nautilus-4vndh
Dec 20 12:07:04.702: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 12:07:04.702: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 12:07:04.702: INFO: update-demo-nautilus-4vndh is verified up and running
Dec 20 12:07:04.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntlcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:04.851: INFO: stderr: ""
Dec 20 12:07:04.851: INFO: stdout: "true"
Dec 20 12:07:04.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ntlcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:05.017: INFO: stderr: ""
Dec 20 12:07:05.017: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 12:07:05.017: INFO: validating pod update-demo-nautilus-ntlcr
Dec 20 12:07:05.037: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 12:07:05.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 12:07:05.037: INFO: update-demo-nautilus-ntlcr is verified up and running
STEP: using delete to clean up resources
Dec 20 12:07:05.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:05.187: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:07:05.187: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 20 12:07:05.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tgh6q'
Dec 20 12:07:05.359: INFO: stderr: "No resources found.\n"
Dec 20 12:07:05.359: INFO: stdout: ""
Dec 20 12:07:05.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tgh6q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 20 12:07:05.509: INFO: stderr: ""
Dec 20 12:07:05.510: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:07:05.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tgh6q" for this suite.
Dec 20 12:07:29.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:07:29.761: INFO: namespace: e2e-tests-kubectl-tgh6q, resource: bindings, ignored listing per whitelist
Dec 20 12:07:29.967: INFO: namespace e2e-tests-kubectl-tgh6q deletion completed in 24.413309467s

• [SLOW TEST:39.083 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:07:29.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-4be60fd3-2321-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:07:30.528: INFO: Waiting up to 5m0s for pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-tr8vr" to be "success or failure"
Dec 20 12:07:30.582: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 53.207084ms
Dec 20 12:07:32.604: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075371336s
Dec 20 12:07:34.614: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086069623s
Dec 20 12:07:36.904: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375927194s
Dec 20 12:07:39.399: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870310201s
Dec 20 12:07:41.420: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.891300656s
STEP: Saw pod success
Dec 20 12:07:41.420: INFO: Pod "pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:07:41.427: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 12:07:41.725: INFO: Waiting for pod pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:07:41.737: INFO: Pod pod-secrets-4be7fd0b-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:07:41.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tr8vr" for this suite.
Dec 20 12:07:47.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:07:47.994: INFO: namespace: e2e-tests-secrets-tr8vr, resource: bindings, ignored listing per whitelist
Dec 20 12:07:48.029: INFO: namespace e2e-tests-secrets-tr8vr deletion completed in 6.273757193s

• [SLOW TEST:18.061 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
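The Secrets test above polls the pod every couple of seconds, logging the elapsed time, until its phase becomes "Succeeded" or "Failed" or the 5m0s timeout expires. A rough sketch of that wait loop (the `get_phase` callback stands in for an API-server query; all names and timings here are hypothetical):

```python
import time

def wait_for_phase(get_phase, wanted=("Succeeded", "Failed"),
                   timeout=10.0, poll=0.05):
    """Poll get_phase() until it returns a terminal phase or time runs out.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or
    failure"' messages in the log, one Phase= line per poll.
    """
    start = time.monotonic()
    phase = get_phase()
    while time.monotonic() - start < timeout:
        if phase in wanted:
            return phase
        time.sleep(poll)
        phase = get_phase()
    raise TimeoutError(f"pod still {phase!r} after {timeout}s")

# Simulate a pod that reports Pending twice before reaching Succeeded,
# like the sequence of Phase="Pending" lines in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), timeout=2.0)
```

In the log this condition is satisfied after about 10.9 s, at which point the framework reports "Saw pod success" and deletes the pod.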
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:07:48.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-th5fh
Dec 20 12:08:00.316: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-th5fh
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 12:08:00.323: INFO: Initial restart count of pod liveness-http is 0
Dec 20 12:08:23.569: INFO: Restart count of pod e2e-tests-container-probe-th5fh/liveness-http is now 1 (23.245938022s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:08:23.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-th5fh" for this suite.
Dec 20 12:08:30.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:08:30.272: INFO: namespace: e2e-tests-container-probe-th5fh, resource: bindings, ignored listing per whitelist
Dec 20 12:08:30.325: INFO: namespace e2e-tests-container-probe-th5fh deletion completed in 6.455515343s

• [SLOW TEST:42.295 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:08:30.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 20 12:08:30.616: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:08:54.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-898qr" for this suite.
Dec 20 12:09:18.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:09:18.496: INFO: namespace: e2e-tests-init-container-898qr, resource: bindings, ignored listing per whitelist
Dec 20 12:09:18.570: INFO: namespace e2e-tests-init-container-898qr deletion completed in 24.372831424s

• [SLOW TEST:48.245 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:09:18.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 20 12:09:18.970: INFO: Waiting up to 5m0s for pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-var-expansion-l5jv6" to be "success or failure"
Dec 20 12:09:18.986: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.605583ms
Dec 20 12:09:21.270: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299327932s
Dec 20 12:09:23.297: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327069229s
Dec 20 12:09:25.424: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453360893s
Dec 20 12:09:27.441: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.471097133s
Dec 20 12:09:29.457: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.486514042s
STEP: Saw pod success
Dec 20 12:09:29.457: INFO: Pod "var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:09:29.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 12:09:30.310: INFO: Waiting for pod var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:09:30.321: INFO: Pod var-expansion-8cb20ae2-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:09:30.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-l5jv6" for this suite.
Dec 20 12:09:36.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:09:36.725: INFO: namespace: e2e-tests-var-expansion-l5jv6, resource: bindings, ignored listing per whitelist
Dec 20 12:09:36.761: INFO: namespace e2e-tests-var-expansion-l5jv6 deletion completed in 6.428065249s

• [SLOW TEST:18.189 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:09:36.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-97729d99-2321-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 12:09:37.021: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-nbppq" to be "success or failure"
Dec 20 12:09:37.144: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 123.204747ms
Dec 20 12:09:39.514: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492444764s
Dec 20 12:09:41.530: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.509389293s
Dec 20 12:09:44.718: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.697284114s
Dec 20 12:09:46.729: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.708253214s
Dec 20 12:09:48.747: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.725480353s
STEP: Saw pod success
Dec 20 12:09:48.747: INFO: Pod "pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:09:48.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 12:09:48.936: INFO: Waiting for pod pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:09:48.946: INFO: Pod pod-projected-configmaps-977402ba-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:09:48.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nbppq" for this suite.
Dec 20 12:09:55.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:09:55.277: INFO: namespace: e2e-tests-projected-nbppq, resource: bindings, ignored listing per whitelist
Dec 20 12:09:55.299: INFO: namespace e2e-tests-projected-nbppq deletion completed in 6.337172603s

• [SLOW TEST:18.538 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:09:55.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:10:09.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-d2l2q" for this suite.
Dec 20 12:10:36.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:10:36.428: INFO: namespace: e2e-tests-replication-controller-d2l2q, resource: bindings, ignored listing per whitelist
Dec 20 12:10:36.442: INFO: namespace e2e-tests-replication-controller-d2l2q deletion completed in 27.232235531s

• [SLOW TEST:41.142 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:10:36.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 20 12:10:36.697: INFO: Waiting up to 5m0s for pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-zvs6t" to be "success or failure"
Dec 20 12:10:36.705: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19543ms
Dec 20 12:10:38.789: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092116284s
Dec 20 12:10:40.820: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123260067s
Dec 20 12:10:42.945: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247806563s
Dec 20 12:10:44.956: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258919319s
Dec 20 12:10:46.975: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.278299777s
STEP: Saw pod success
Dec 20 12:10:46.975: INFO: Pod "downward-api-bb05435a-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:10:46.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-bb05435a-2321-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 12:10:47.121: INFO: Waiting for pod downward-api-bb05435a-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:10:47.142: INFO: Pod downward-api-bb05435a-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:10:47.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zvs6t" for this suite.
Dec 20 12:10:54.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:10:54.550: INFO: namespace: e2e-tests-downward-api-zvs6t, resource: bindings, ignored listing per whitelist
Dec 20 12:10:54.624: INFO: namespace e2e-tests-downward-api-zvs6t deletion completed in 7.467046528s

• [SLOW TEST:18.182 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:10:54.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:11:54.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-vdfdx" for this suite.
Dec 20 12:12:01.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:12:01.593: INFO: namespace: e2e-tests-container-runtime-vdfdx, resource: bindings, ignored listing per whitelist
Dec 20 12:12:01.873: INFO: namespace e2e-tests-container-runtime-vdfdx deletion completed in 7.420130871s

• [SLOW TEST:67.248 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:12:01.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 20 12:12:02.269: INFO: Waiting up to 5m0s for pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-var-expansion-htrhd" to be "success or failure"
Dec 20 12:12:02.282: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.510973ms
Dec 20 12:12:04.656: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386659272s
Dec 20 12:12:06.678: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408721896s
Dec 20 12:12:08.704: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4338716s
Dec 20 12:12:10.717: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447732567s
Dec 20 12:12:12.732: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462237264s
STEP: Saw pod success
Dec 20 12:12:12.732: INFO: Pod "var-expansion-edf396ac-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:12:12.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-edf396ac-2321-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 12:12:13.095: INFO: Waiting for pod var-expansion-edf396ac-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:12:13.108: INFO: Pod var-expansion-edf396ac-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:12:13.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-htrhd" for this suite.
Dec 20 12:12:19.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:12:19.284: INFO: namespace: e2e-tests-var-expansion-htrhd, resource: bindings, ignored listing per whitelist
Dec 20 12:12:19.311: INFO: namespace e2e-tests-var-expansion-htrhd deletion completed in 6.197288239s

• [SLOW TEST:17.437 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:12:19.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 20 12:12:19.543: INFO: Waiting up to 5m0s for pod "pod-f845f000-2321-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-6ccnw" to be "success or failure"
Dec 20 12:12:19.559: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.91842ms
Dec 20 12:12:21.786: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242059487s
Dec 20 12:12:23.814: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270019085s
Dec 20 12:12:25.835: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.291651671s
Dec 20 12:12:27.861: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31728746s
Dec 20 12:12:29.879: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.335904524s
STEP: Saw pod success
Dec 20 12:12:29.880: INFO: Pod "pod-f845f000-2321-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:12:29.888: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f845f000-2321-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:12:30.472: INFO: Waiting for pod pod-f845f000-2321-11ea-851f-0242ac110004 to disappear
Dec 20 12:12:30.779: INFO: Pod pod-f845f000-2321-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:12:30.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6ccnw" for this suite.
Dec 20 12:12:38.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:12:39.209: INFO: namespace: e2e-tests-emptydir-6ccnw, resource: bindings, ignored listing per whitelist
Dec 20 12:12:39.256: INFO: namespace e2e-tests-emptydir-6ccnw deletion completed in 8.46488419s

• [SLOW TEST:19.945 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:12:39.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-042e3b20-2322-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:12:39.436: INFO: Waiting up to 5m0s for pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-xj6hl" to be "success or failure"
Dec 20 12:12:39.482: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 45.909388ms
Dec 20 12:12:41.578: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141829575s
Dec 20 12:12:43.609: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172965351s
Dec 20 12:12:45.742: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305880037s
Dec 20 12:12:47.752: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31663612s
Dec 20 12:12:49.769: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332805574s
STEP: Saw pod success
Dec 20 12:12:49.769: INFO: Pod "pod-secrets-042f1010-2322-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:12:49.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-042f1010-2322-11ea-851f-0242ac110004 container secret-env-test: 
STEP: delete the pod
Dec 20 12:12:50.759: INFO: Waiting for pod pod-secrets-042f1010-2322-11ea-851f-0242ac110004 to disappear
Dec 20 12:12:50.775: INFO: Pod pod-secrets-042f1010-2322-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:12:50.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xj6hl" for this suite.
Dec 20 12:12:58.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:12:58.923: INFO: namespace: e2e-tests-secrets-xj6hl, resource: bindings, ignored listing per whitelist
Dec 20 12:12:58.990: INFO: namespace e2e-tests-secrets-xj6hl deletion completed in 8.206064764s

• [SLOW TEST:19.733 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:12:58.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:12:59.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-kcrjv" to be "success or failure"
Dec 20 12:12:59.239: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 49.424846ms
Dec 20 12:13:01.259: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069451424s
Dec 20 12:13:03.285: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09510563s
Dec 20 12:13:05.489: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299572984s
Dec 20 12:13:07.984: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794531669s
Dec 20 12:13:10.427: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.236970656s
Dec 20 12:13:12.775: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.585160534s
Dec 20 12:13:14.823: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.632656999s
STEP: Saw pod success
Dec 20 12:13:14.823: INFO: Pod "downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:13:14.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:13:15.200: INFO: Waiting for pod downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004 to disappear
Dec 20 12:13:15.212: INFO: Pod downwardapi-volume-0ff3d4e2-2322-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:13:15.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kcrjv" for this suite.
Dec 20 12:13:23.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:13:23.392: INFO: namespace: e2e-tests-projected-kcrjv, resource: bindings, ignored listing per whitelist
Dec 20 12:13:23.490: INFO: namespace e2e-tests-projected-kcrjv deletion completed in 8.270481304s

• [SLOW TEST:24.499 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:13:23.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-1ea5f5ad-2322-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:13:23.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-l6nkv" to be "success or failure"
Dec 20 12:13:23.962: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.9834ms
Dec 20 12:13:26.087: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142025428s
Dec 20 12:13:28.102: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157741456s
Dec 20 12:13:30.128: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18390582s
Dec 20 12:13:32.144: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199048695s
Dec 20 12:13:35.299: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.354538159s
STEP: Saw pod success
Dec 20 12:13:35.299: INFO: Pod "pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:13:35.317: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 12:13:36.090: INFO: Waiting for pod pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004 to disappear
Dec 20 12:13:36.103: INFO: Pod pod-projected-secrets-1ea90953-2322-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:13:36.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l6nkv" for this suite.
Dec 20 12:13:42.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:13:42.284: INFO: namespace: e2e-tests-projected-l6nkv, resource: bindings, ignored listing per whitelist
Dec 20 12:13:42.421: INFO: namespace e2e-tests-projected-l6nkv deletion completed in 6.303393142s

• [SLOW TEST:18.930 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:13:42.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hvwxp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hvwxp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

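The wheezy/jessie one-liners above all follow the same check-and-record pattern: query a name, and write `OK` to a per-name results file only when the answer is non-empty. A minimal sketch of that pattern, with `dig` replaced by a stubbed `lookup` function so it runs without cluster DNS (the stub and the temp results directory are assumptions, not the test's actual code):

```shell
#!/bin/sh
# Sketch of the probe pattern used by the wheezy/jessie scripts.
# `lookup` is a stand-in for:
#   dig +notcp +noall +answer +search "$name" A
results=$(mktemp -d)
lookup() { echo "10.96.0.1"; }   # stubbed answer; the real script uses dig

for name in kubernetes.default kubernetes.default.svc; do
  # Record OK only if the (stubbed) lookup produced a non-empty answer.
  check="$(lookup "$name")" && test -n "$check" && echo OK > "$results/udp@$name"
done

cat "$results/udp@kubernetes.default"
```

The prober pod then reports success once every expected results file contains `OK`, which is why the log lists each missing file individually while the pod is still starting.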
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 20 12:13:58.921: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.926: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.934: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.947: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.954: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.974: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.983: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:58.989: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.001: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.013: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.066: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.075: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.083: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.087: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.093: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.100: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.108: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.113: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.116: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.121: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004: the server could not find the requested resource (get pods dns-test-29e337fa-2322-11ea-851f-0242ac110004)
Dec 20 12:13:59.121: INFO: Lookups using e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hvwxp.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 20 12:14:04.365: INFO: DNS probes using e2e-tests-dns-hvwxp/dns-test-29e337fa-2322-11ea-851f-0242ac110004 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:14:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-hvwxp" for this suite.
Dec 20 12:14:12.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:14:12.867: INFO: namespace: e2e-tests-dns-hvwxp, resource: bindings, ignored listing per whitelist
Dec 20 12:14:12.915: INFO: namespace e2e-tests-dns-hvwxp deletion completed in 8.369796001s

• [SLOW TEST:30.494 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:14:12.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3c0f44c7-2322-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:14:13.257: INFO: Waiting up to 5m0s for pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-rrgct" to be "success or failure"
Dec 20 12:14:13.267: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.584339ms
Dec 20 12:14:15.447: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190321476s
Dec 20 12:14:17.461: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204610704s
Dec 20 12:14:19.661: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404031752s
Dec 20 12:14:21.683: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.426211888s
Dec 20 12:14:24.232: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.975224112s
STEP: Saw pod success
Dec 20 12:14:24.232: INFO: Pod "pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:14:24.242: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 12:14:24.552: INFO: Waiting for pod pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004 to disappear
Dec 20 12:14:24.627: INFO: Pod pod-secrets-3c17fe75-2322-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:14:24.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rrgct" for this suite.
Dec 20 12:14:30.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:14:30.945: INFO: namespace: e2e-tests-secrets-rrgct, resource: bindings, ignored listing per whitelist
Dec 20 12:14:31.009: INFO: namespace e2e-tests-secrets-rrgct deletion completed in 6.314666122s

• [SLOW TEST:18.094 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:14:31.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-skddc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-skddc to expose endpoints map[]
Dec 20 12:14:31.454: INFO: Get endpoints failed (20.669217ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 20 12:14:32.481: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-skddc exposes endpoints map[] (1.047316322s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-skddc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-skddc to expose endpoints map[pod1:[100]]
Dec 20 12:14:37.040: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.515383575s elapsed, will retry)
Dec 20 12:14:41.234: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-skddc exposes endpoints map[pod1:[100]] (8.709456416s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-skddc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-skddc to expose endpoints map[pod1:[100] pod2:[101]]
Dec 20 12:14:46.792: INFO: Unexpected endpoints: found map[479666bd-2322-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (5.545183624s elapsed, will retry)
Dec 20 12:14:48.969: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-skddc exposes endpoints map[pod1:[100] pod2:[101]] (7.722804424s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-skddc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-skddc to expose endpoints map[pod2:[101]]
Dec 20 12:14:50.130: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-skddc exposes endpoints map[pod2:[101]] (1.120340146s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-skddc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-skddc to expose endpoints map[]
Dec 20 12:14:50.501: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-skddc exposes endpoints map[] (336.918996ms elapsed)
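The "(… elapsed, will retry)" lines above come from a poll-until-expected loop: the framework repeatedly reads the service's Endpoints and compares against the expected pod:port map until they match or the 3m0s deadline expires. A hedged sketch of that retry shape, with the Endpoints read stubbed out (`get_endpoints` and the literal `pod1:100` encoding are illustrative assumptions, not the framework's code):

```shell
#!/bin/sh
# Sketch of the wait-for-endpoints retry loop. `get_endpoints` stands in
# for querying the Endpoints object; here it "converges" after 3 polls.
attempt=0
get_endpoints() { [ "$attempt" -ge 3 ] && echo "pod1:100" || echo ""; }

while [ "$attempt" -lt 10 ]; do
  eps="$(get_endpoints)"
  # Stop polling once the observed endpoints match the expected map.
  [ "$eps" = "pod1:100" ] && break
  attempt=$((attempt + 1))
done

echo "$eps"
```

In the real test the mismatched observations are logged (the "Unexpected endpoints: found map[...]" lines) before the next poll, and success is logged with the total elapsed time.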
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:14:50.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-skddc" for this suite.
Dec 20 12:15:14.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:15:14.933: INFO: namespace: e2e-tests-services-skddc, resource: bindings, ignored listing per whitelist
Dec 20 12:15:14.941: INFO: namespace e2e-tests-services-skddc deletion completed in 24.229162517s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:43.931 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:15:14.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gs74j
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 20 12:15:15.210: INFO: Found 0 stateful pods, waiting for 3
Dec 20 12:15:25.238: INFO: Found 2 stateful pods, waiting for 3
Dec 20 12:15:35.227: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:15:35.227: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:15:35.227: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 20 12:15:45.301: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:15:45.301: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:15:45.301: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 20 12:15:45.372: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 20 12:15:55.546: INFO: Updating stateful set ss2
Dec 20 12:15:55.701: INFO: Waiting for Pod e2e-tests-statefulset-gs74j/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 12:16:05.827: INFO: Waiting for Pod e2e-tests-statefulset-gs74j/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 20 12:16:16.398: INFO: Found 2 stateful pods, waiting for 3
Dec 20 12:16:26.425: INFO: Found 2 stateful pods, waiting for 3
Dec 20 12:16:36.427: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:16:36.427: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:16:36.427: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 20 12:16:46.419: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:16:46.419: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 20 12:16:46.419: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 20 12:16:46.509: INFO: Updating stateful set ss2
Dec 20 12:16:46.617: INFO: Waiting for Pod e2e-tests-statefulset-gs74j/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 12:16:56.915: INFO: Updating stateful set ss2
Dec 20 12:16:56.949: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs74j/ss2 to complete update
Dec 20 12:16:56.949: INFO: Waiting for Pod e2e-tests-statefulset-gs74j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 12:17:06.987: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs74j/ss2 to complete update
Dec 20 12:17:06.987: INFO: Waiting for Pod e2e-tests-statefulset-gs74j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 20 12:17:16.976: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs74j/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 20 12:17:26.977: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gs74j
Dec 20 12:17:26.983: INFO: Scaling statefulset ss2 to 0
Dec 20 12:18:07.034: INFO: Waiting for statefulset status.replicas updated to 0
Dec 20 12:18:07.040: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:18:07.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gs74j" for this suite.
Dec 20 12:18:15.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:18:15.293: INFO: namespace: e2e-tests-statefulset-gs74j, resource: bindings, ignored listing per whitelist
Dec 20 12:18:15.511: INFO: namespace e2e-tests-statefulset-gs74j deletion completed in 8.424609744s

• [SLOW TEST:180.569 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:18:15.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 20 12:18:15.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:17.799: INFO: stderr: ""
Dec 20 12:18:17.799: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 12:18:17.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:18.215: INFO: stderr: ""
Dec 20 12:18:18.215: INFO: stdout: "update-demo-nautilus-9ssp8 update-demo-nautilus-fqptm "
Dec 20 12:18:18.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:18.400: INFO: stderr: ""
Dec 20 12:18:18.400: INFO: stdout: ""
Dec 20 12:18:18.401: INFO: update-demo-nautilus-9ssp8 is created but not running
Dec 20 12:18:23.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:23.555: INFO: stderr: ""
Dec 20 12:18:23.556: INFO: stdout: "update-demo-nautilus-9ssp8 update-demo-nautilus-fqptm "
Dec 20 12:18:23.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:23.686: INFO: stderr: ""
Dec 20 12:18:23.686: INFO: stdout: ""
Dec 20 12:18:23.686: INFO: update-demo-nautilus-9ssp8 is created but not running
Dec 20 12:18:28.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:28.832: INFO: stderr: ""
Dec 20 12:18:28.832: INFO: stdout: "update-demo-nautilus-9ssp8 update-demo-nautilus-fqptm "
Dec 20 12:18:28.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:28.966: INFO: stderr: ""
Dec 20 12:18:28.966: INFO: stdout: "true"
Dec 20 12:18:28.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:29.061: INFO: stderr: ""
Dec 20 12:18:29.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 12:18:29.061: INFO: validating pod update-demo-nautilus-9ssp8
Dec 20 12:18:29.072: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 12:18:29.073: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 12:18:29.073: INFO: update-demo-nautilus-9ssp8 is verified up and running
Dec 20 12:18:29.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqptm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:29.195: INFO: stderr: ""
Dec 20 12:18:29.195: INFO: stdout: ""
Dec 20 12:18:29.195: INFO: update-demo-nautilus-fqptm is created but not running
Dec 20 12:18:34.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:34.304: INFO: stderr: ""
Dec 20 12:18:34.304: INFO: stdout: "update-demo-nautilus-9ssp8 update-demo-nautilus-fqptm "
Dec 20 12:18:34.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:34.457: INFO: stderr: ""
Dec 20 12:18:34.457: INFO: stdout: "true"
Dec 20 12:18:34.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ssp8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:34.583: INFO: stderr: ""
Dec 20 12:18:34.584: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 12:18:34.584: INFO: validating pod update-demo-nautilus-9ssp8
Dec 20 12:18:34.596: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 12:18:34.596: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 12:18:34.596: INFO: update-demo-nautilus-9ssp8 is verified up and running
Dec 20 12:18:34.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqptm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:34.714: INFO: stderr: ""
Dec 20 12:18:34.714: INFO: stdout: "true"
Dec 20 12:18:34.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fqptm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:18:34.808: INFO: stderr: ""
Dec 20 12:18:34.808: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 12:18:34.808: INFO: validating pod update-demo-nautilus-fqptm
Dec 20 12:18:34.818: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 12:18:34.818: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 12:18:34.818: INFO: update-demo-nautilus-fqptm is verified up and running
STEP: rolling-update to new replication controller
Dec 20 12:18:34.822: INFO: scanned /root for discovery docs: 
Dec 20 12:18:34.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:10.255: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 20 12:19:10.255: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 12:19:10.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:10.478: INFO: stderr: ""
Dec 20 12:19:10.478: INFO: stdout: "update-demo-kitten-78xl7 update-demo-kitten-l84nf update-demo-nautilus-9ssp8 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 20 12:19:15.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:15.738: INFO: stderr: ""
Dec 20 12:19:15.738: INFO: stdout: "update-demo-kitten-78xl7 update-demo-kitten-l84nf "
Dec 20 12:19:15.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-78xl7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:15.913: INFO: stderr: ""
Dec 20 12:19:15.913: INFO: stdout: "true"
Dec 20 12:19:15.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-78xl7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:16.084: INFO: stderr: ""
Dec 20 12:19:16.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 20 12:19:16.084: INFO: validating pod update-demo-kitten-78xl7
Dec 20 12:19:16.107: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 20 12:19:16.107: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 20 12:19:16.107: INFO: update-demo-kitten-78xl7 is verified up and running
Dec 20 12:19:16.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l84nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:16.207: INFO: stderr: ""
Dec 20 12:19:16.208: INFO: stdout: "true"
Dec 20 12:19:16.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l84nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wmclw'
Dec 20 12:19:16.335: INFO: stderr: ""
Dec 20 12:19:16.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 20 12:19:16.335: INFO: validating pod update-demo-kitten-l84nf
Dec 20 12:19:16.349: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 20 12:19:16.349: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 20 12:19:16.349: INFO: update-demo-kitten-l84nf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:19:16.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wmclw" for this suite.
Dec 20 12:19:42.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:19:42.629: INFO: namespace: e2e-tests-kubectl-wmclw, resource: bindings, ignored listing per whitelist
Dec 20 12:19:42.684: INFO: namespace e2e-tests-kubectl-wmclw deletion completed in 26.327857601s

• [SLOW TEST:87.173 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
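The repeated "is created but not running" / "is verified up and running" lines above come from a poll loop: the suite re-runs a `kubectl get pods ... -o template` query until the `update-demo` container reports a running state. A minimal sketch of that pattern, with the cluster query stubbed out so it runs anywhere (`check_running` is a hypothetical stand-in for the real kubectl call shown in the log):

```shell
#!/bin/sh
# check_running stands in for the log's query:
#   kubectl get pods "$1" -o template \
#     --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
# which prints "true" only once the container is running.
check_running() {
  echo "true"  # stub: always "running" so the sketch terminates immediately
}

for pod in update-demo-nautilus-9ssp8 update-demo-nautilus-fqptm; do
  until [ "$(check_running "$pod")" = "true" ]; do
    echo "$pod is created but not running"
    sleep 5  # the suite re-polls on a 5s interval, as the timestamps show
  done
  echo "$pod is verified up and running"
done
```

After verification, the test issues `kubectl rolling-update` (deprecated in favor of `kubectl rollout`, as the stderr line notes) and repeats the same poll loop against the replacement `kitten` pods.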
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:19:42.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 20 12:20:03.290: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:03.306: INFO: Pod pod-with-poststart-http-hook still exists
Dec 20 12:20:05.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:05.324: INFO: Pod pod-with-poststart-http-hook still exists
Dec 20 12:20:07.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:07.321: INFO: Pod pod-with-poststart-http-hook still exists
Dec 20 12:20:09.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:09.328: INFO: Pod pod-with-poststart-http-hook still exists
Dec 20 12:20:11.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:11.325: INFO: Pod pod-with-poststart-http-hook still exists
Dec 20 12:20:13.306: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 20 12:20:13.325: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:20:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bcdf4" for this suite.
Dec 20 12:20:35.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:20:35.499: INFO: namespace: e2e-tests-container-lifecycle-hook-bcdf4, resource: bindings, ignored listing per whitelist
Dec 20 12:20:35.593: INFO: namespace e2e-tests-container-lifecycle-hook-bcdf4 deletion completed in 22.255824343s

• [SLOW TEST:52.909 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
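The poststart test above creates a handler pod, then a pod whose container declares a `postStart` httpGet hook aimed at it. Illustrative only: a manifest of roughly the shape this test builds, written out by a script; the pod name matches the log, but the image, path, and port here are assumptions, and the real test targets a separate handler pod rather than the default pod IP.

```shell
#!/bin/sh
# Write a hypothetical postStart-hook pod manifest; a live run would follow
# with: kubectl apply -f /tmp/poststart-pod.yaml
cat <<'EOF' > /tmp/poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # assumed path: the hook just needs a 2xx
          port: 8080                  # assumed port on the handler
EOF
grep -c 'postStart' /tmp/poststart-pod.yaml
```

The "check poststart hook" STEP then verifies the handler actually received the request before the pod is deleted.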
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:20:35.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 20 12:20:36.774: INFO: Pod name wrapped-volume-race-20a9ca94-2323-11ea-851f-0242ac110004: Found 0 pods out of 5
Dec 20 12:20:41.802: INFO: Pod name wrapped-volume-race-20a9ca94-2323-11ea-851f-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-20a9ca94-2323-11ea-851f-0242ac110004 in namespace e2e-tests-emptydir-wrapper-j2hjr, will wait for the garbage collector to delete the pods
Dec 20 12:22:34.109: INFO: Deleting ReplicationController wrapped-volume-race-20a9ca94-2323-11ea-851f-0242ac110004 took: 65.010207ms
Dec 20 12:22:34.610: INFO: Terminating ReplicationController wrapped-volume-race-20a9ca94-2323-11ea-851f-0242ac110004 pods took: 500.747035ms
STEP: Creating RC which spawns configmap-volume pods
Dec 20 12:23:23.181: INFO: Pod name wrapped-volume-race-83d96634-2323-11ea-851f-0242ac110004: Found 0 pods out of 5
Dec 20 12:23:28.209: INFO: Pod name wrapped-volume-race-83d96634-2323-11ea-851f-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-83d96634-2323-11ea-851f-0242ac110004 in namespace e2e-tests-emptydir-wrapper-j2hjr, will wait for the garbage collector to delete the pods
Dec 20 12:25:42.542: INFO: Deleting ReplicationController wrapped-volume-race-83d96634-2323-11ea-851f-0242ac110004 took: 93.757053ms
Dec 20 12:25:42.843: INFO: Terminating ReplicationController wrapped-volume-race-83d96634-2323-11ea-851f-0242ac110004 pods took: 301.506418ms
STEP: Creating RC which spawns configmap-volume pods
Dec 20 12:26:33.936: INFO: Pod name wrapped-volume-race-f579c813-2323-11ea-851f-0242ac110004: Found 0 pods out of 5
Dec 20 12:26:38.969: INFO: Pod name wrapped-volume-race-f579c813-2323-11ea-851f-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f579c813-2323-11ea-851f-0242ac110004 in namespace e2e-tests-emptydir-wrapper-j2hjr, will wait for the garbage collector to delete the pods
Dec 20 12:28:23.309: INFO: Deleting ReplicationController wrapped-volume-race-f579c813-2323-11ea-851f-0242ac110004 took: 25.789748ms
Dec 20 12:28:24.010: INFO: Terminating ReplicationController wrapped-volume-race-f579c813-2323-11ea-851f-0242ac110004 pods took: 701.054622ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:29:14.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-j2hjr" for this suite.
Dec 20 12:29:22.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:29:22.398: INFO: namespace: e2e-tests-emptydir-wrapper-j2hjr, resource: bindings, ignored listing per whitelist
Dec 20 12:29:22.448: INFO: namespace e2e-tests-emptydir-wrapper-j2hjr deletion completed in 8.286581845s

• [SLOW TEST:526.855 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
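The emptyDir-wrapper race test works by fan-out: it creates 50 ConfigMaps, then an RC whose five pods each mount all of them, and repeats that cycle three times (the three "Creating RC which spawns configmap-volume pods" STEPs above). A sketch of just the name fan-out, with the actual `kubectl create configmap` call left as a comment; the name prefix here is an assumption, not the suite's:

```shell
#!/bin/sh
# Generate the 50 ConfigMap names the setup loop would create.
gen_names() {
  i=1
  while [ "$i" -le 50 ]; do
    # live cluster: kubectl create configmap "wrapped-volume-configmap-$i" --from-literal=data=x
    echo "wrapped-volume-configmap-$i"
    i=$((i + 1))
  done
}
gen_names | wc -l
```

Mounting that many configmap volumes per pod is what historically raced inside the kubelet's emptyDir wrapper, which is why the test then waits for the garbage collector to delete every pod before the next cycle.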
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:29:22.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 20 12:29:50.792: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:29:50.848: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:29:52.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:29:52.886: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:29:54.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:29:54.903: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:29:56.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:29:56.876: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:29:58.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:29:58.877: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:00.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:00.888: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:02.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:02.875: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:04.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:04.888: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:06.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:06.891: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:08.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:08.899: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:10.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:10.990: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 20 12:30:12.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 20 12:30:12.869: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:30:12.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kdb74" for this suite.
Dec 20 12:30:37.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:30:37.189: INFO: namespace: e2e-tests-container-lifecycle-hook-kdb74, resource: bindings, ignored listing per whitelist
Dec 20 12:30:37.291: INFO: namespace e2e-tests-container-lifecycle-hook-kdb74 deletion completed in 24.35273275s

• [SLOW TEST:74.842 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
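The long run of "Waiting for pod ... to disappear" / "still exists" lines is a delete-then-poll loop: delete the pod, then re-check existence every couple of seconds until the API no longer returns it. A self-contained sketch, with `pod_exists` as a stub for the real check (which would be `kubectl get pod "$1" >/dev/null 2>&1`):

```shell
#!/bin/sh
# Stub: pretend the pod vanishes on the third poll, so the loop prints
# "still exists" twice and then the terminal "no longer exists" line.
attempts=0
pod_exists() {
  attempts=$((attempts + 1))
  [ "$attempts" -lt 3 ]
}

while pod_exists pod-with-prestop-exec-hook; do
  echo "Pod pod-with-prestop-exec-hook still exists"
  sleep 1  # the log's timestamps show the framework re-polling every 2s
done
echo "Pod pod-with-prestop-exec-hook no longer exists"
```

Only after the pod is fully gone does the test run its "check prestop hook" STEP, since the preStop exec hook fires during termination.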
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:30:37.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 20 12:30:37.448: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 20 12:30:37.455: INFO: Waiting for terminating namespaces to be deleted...
Dec 20 12:30:37.458: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 20 12:30:37.472: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 12:30:37.472: INFO: 	Container coredns ready: true, restart count 0
Dec 20 12:30:37.472: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 20 12:30:37.472: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 20 12:30:37.472: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:30:37.472: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 20 12:30:37.472: INFO: 	Container weave ready: true, restart count 0
Dec 20 12:30:37.472: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 12:30:37.472: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 12:30:37.472: INFO: 	Container coredns ready: true, restart count 0
Dec 20 12:30:37.472: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:30:37.472: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:30:37.472: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 20 12:30:37.570: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86ceaeb0-2324-11ea-851f-0242ac110004.15e214350c01fb8f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-v6wk9/filler-pod-86ceaeb0-2324-11ea-851f-0242ac110004 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86ceaeb0-2324-11ea-851f-0242ac110004.15e2143642cc8602], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86ceaeb0-2324-11ea-851f-0242ac110004.15e21436d445d06b], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86ceaeb0-2324-11ea-851f-0242ac110004.15e2143704a93eb0], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e21437626f62c3], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:30:48.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-v6wk9" for this suite.
Dec 20 12:30:57.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:30:57.315: INFO: namespace: e2e-tests-sched-pred-v6wk9, resource: bindings, ignored listing per whitelist
Dec 20 12:30:57.352: INFO: namespace e2e-tests-sched-pred-v6wk9 deletion completed in 8.262692505s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.060 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
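The scheduling test above tallies the CPU already requested on the node, starts a filler pod sized to consume most of the remainder, and then asserts that one more pod fails with `0/1 nodes are available: 1 Insufficient cpu`. Summing the per-pod requests the log reports (coredns ×2, etcd, apiserver, controller-manager, kube-proxy, scheduler, weave-net):

```shell
#!/bin/sh
# Millicore requests in the order the log prints them:
# 100 (coredns) + 100 (coredns) + 0 (etcd) + 250 (apiserver)
# + 200 (controller-manager) + 0 (kube-proxy) + 100 (scheduler) + 20 (weave)
total=0
for m in 100 100 0 250 200 0 100 20; do
  total=$((total + m))
done
echo "${total}m already requested"
```

So 770m of the node's allocatable CPU is spoken for before the filler pod starts; the filler pod takes up the rest, which is what makes `additional-pod` unschedulable.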
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:30:57.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:31:10.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ms6nm" for this suite.
Dec 20 12:31:16.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:31:16.628: INFO: namespace: e2e-tests-kubelet-test-ms6nm, resource: bindings, ignored listing per whitelist
Dec 20 12:31:16.657: INFO: namespace e2e-tests-kubelet-test-ms6nm deletion completed in 6.156392444s

• [SLOW TEST:19.305 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:31:16.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 20 12:31:16.993: INFO: Waiting up to 5m0s for pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-fjmx9" to be "success or failure"
Dec 20 12:31:17.025: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.358364ms
Dec 20 12:31:19.510: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516312153s
Dec 20 12:31:21.535: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541899547s
Dec 20 12:31:23.557: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.563843839s
Dec 20 12:31:25.864: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870181294s
Dec 20 12:31:27.881: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.8876904s
STEP: Saw pod success
Dec 20 12:31:27.881: INFO: Pod "pod-9e4a4c6b-2324-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:31:27.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9e4a4c6b-2324-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:31:28.806: INFO: Waiting for pod pod-9e4a4c6b-2324-11ea-851f-0242ac110004 to disappear
Dec 20 12:31:28.825: INFO: Pod pod-9e4a4c6b-2324-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:31:28.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fjmx9" for this suite.
Dec 20 12:31:35.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:31:35.095: INFO: namespace: e2e-tests-emptydir-fjmx9, resource: bindings, ignored listing per whitelist
Dec 20 12:31:35.299: INFO: namespace e2e-tests-emptydir-fjmx9 deletion completed in 6.450280016s

• [SLOW TEST:18.641 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:31:35.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:31:35.524: INFO: Creating deployment "nginx-deployment"
Dec 20 12:31:35.535: INFO: Waiting for observed generation 1
Dec 20 12:31:37.570: INFO: Waiting for all required pods to come up
Dec 20 12:31:37.584: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 20 12:32:20.846: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 20 12:32:20.866: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 20 12:32:20.882: INFO: Updating deployment nginx-deployment
Dec 20 12:32:20.882: INFO: Waiting for observed generation 2
Dec 20 12:32:24.910: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 20 12:32:24.932: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 20 12:32:25.187: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 20 12:32:25.971: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 20 12:32:25.971: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 20 12:32:25.976: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 20 12:32:27.044: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 20 12:32:27.044: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 20 12:32:27.795: INFO: Updating deployment nginx-deployment
Dec 20 12:32:27.795: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 20 12:32:28.654: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 20 12:32:30.759: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 20 12:32:32.692: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tgvg5/deployments/nginx-deployment,UID:a95a7b26-2324-11ea-a994-fa163e34d433,ResourceVersion:15456838,Generation:3,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-20 12:32:21 +0000 UTC 2019-12-20 12:31:35 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-20 12:32:29 +0000 UTC 2019-12-20 12:32:29 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 20 12:32:32.914: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tgvg5/replicasets/nginx-deployment-5c98f8fb5,UID:c464a085-2324-11ea-a994-fa163e34d433,ResourceVersion:15456833,Generation:3,CreationTimestamp:2019-12-20 12:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a95a7b26-2324-11ea-a994-fa163e34d433 0xc0011c89f7 0xc0011c89f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 12:32:32.914: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 20 12:32:32.915: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tgvg5/replicasets/nginx-deployment-85ddf47c5d,UID:a95e2b5b-2324-11ea-a994-fa163e34d433,ResourceVersion:15456883,Generation:3,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a95a7b26-2324-11ea-a994-fa163e34d433 0xc0011c8ab7 0xc0011c8ab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 20 12:32:33.052: INFO: Pod "nginx-deployment-5c98f8fb5-2t7br" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2t7br,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-2t7br,UID:c4b5ef20-2324-11ea-a994-fa163e34d433,ResourceVersion:15456823,Generation:0,CreationTimestamp:2019-12-20 12:32:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a26be7 0xc000a26be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a26cc0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a26ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.053: INFO: Pod "nginx-deployment-5c98f8fb5-c7b74" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c7b74,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-c7b74,UID:cabf6a34-2324-11ea-a994-fa163e34d433,ResourceVersion:15456875,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a26da7 0xc000a26da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a26e10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a26e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.054: INFO: Pod "nginx-deployment-5c98f8fb5-cm2ts" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cm2ts,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-cm2ts,UID:c4beee3a-2324-11ea-a994-fa163e34d433,ResourceVersion:15456832,Generation:0,CreationTimestamp:2019-12-20 12:32:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a26ea7 0xc000a26ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a26f10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a26f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.055: INFO: Pod "nginx-deployment-5c98f8fb5-kjvcv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kjvcv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-kjvcv,UID:ca4af861-2324-11ea-a994-fa163e34d433,ResourceVersion:15456856,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a26ff7 0xc000a26ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a270f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.055: INFO: Pod "nginx-deployment-5c98f8fb5-p5w4q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p5w4q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-p5w4q,UID:ca4c1893-2324-11ea-a994-fa163e34d433,ResourceVersion:15456866,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27187 0xc000a27188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a271f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.056: INFO: Pod "nginx-deployment-5c98f8fb5-qqfcd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qqfcd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-qqfcd,UID:c4777b7f-2324-11ea-a994-fa163e34d433,ResourceVersion:15456825,Generation:0,CreationTimestamp:2019-12-20 12:32:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27377 0xc000a27378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27400} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.056: INFO: Pod "nginx-deployment-5c98f8fb5-rwnc7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rwnc7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-rwnc7,UID:c46ad01b-2324-11ea-a994-fa163e34d433,ResourceVersion:15456819,Generation:0,CreationTimestamp:2019-12-20 12:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27527 0xc000a27528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27590} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.056: INFO: Pod "nginx-deployment-5c98f8fb5-smkmb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-smkmb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-smkmb,UID:cabf2a31-2324-11ea-a994-fa163e34d433,ResourceVersion:15456874,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27797 0xc000a27798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27880} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a278a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.057: INFO: Pod "nginx-deployment-5c98f8fb5-tdf7t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tdf7t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-tdf7t,UID:cb39d576-2324-11ea-a994-fa163e34d433,ResourceVersion:15456888,Generation:0,CreationTimestamp:2019-12-20 12:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27937 0xc000a27938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a279a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a279c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.057: INFO: Pod "nginx-deployment-5c98f8fb5-vssk5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vssk5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-vssk5,UID:cabf5f13-2324-11ea-a994-fa163e34d433,ResourceVersion:15456877,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27a67 0xc000a27a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27af0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.058: INFO: Pod "nginx-deployment-5c98f8fb5-xpw4d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xpw4d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-xpw4d,UID:ca456a80-2324-11ea-a994-fa163e34d433,ResourceVersion:15456853,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27ba7 0xc000a27ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27c40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.058: INFO: Pod "nginx-deployment-5c98f8fb5-z5np5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z5np5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-z5np5,UID:c477dc05-2324-11ea-a994-fa163e34d433,ResourceVersion:15456822,Generation:0,CreationTimestamp:2019-12-20 12:32:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000a27e17 0xc000a27e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a27ec0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000a27ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.059: INFO: Pod "nginx-deployment-5c98f8fb5-zb22b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zb22b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-5c98f8fb5-zb22b,UID:cabf48c5-2324-11ea-a994-fa163e34d433,ResourceVersion:15456878,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c464a085-2324-11ea-a994-fa163e34d433 0xc000c52037 0xc000c52038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000c520a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc000c520c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.059: INFO: Pod "nginx-deployment-85ddf47c5d-27m92" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-27m92,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-27m92,UID:ca4bbc67-2324-11ea-a994-fa163e34d433,ResourceVersion:15456858,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c52137 0xc000c52138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c521f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.060: INFO: Pod "nginx-deployment-85ddf47c5d-2hfzb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2hfzb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-2hfzb,UID:ca4d36ca-2324-11ea-a994-fa163e34d433,ResourceVersion:15456871,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c522f7 0xc000c522f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c52370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.060: INFO: Pod "nginx-deployment-85ddf47c5d-2rtdz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2rtdz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-2rtdz,UID:cac06517-2324-11ea-a994-fa163e34d433,ResourceVersion:15456882,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c52407 0xc000c52408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c52470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.061: INFO: Pod "nginx-deployment-85ddf47c5d-625jq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-625jq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-625jq,UID:cac01c74-2324-11ea-a994-fa163e34d433,ResourceVersion:15456881,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c525a7 0xc000c525a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c52670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.061: INFO: Pod "nginx-deployment-85ddf47c5d-99qvp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-99qvp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-99qvp,UID:ca4567b7-2324-11ea-a994-fa163e34d433,ResourceVersion:15456855,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c52707 0xc000c52708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c52770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.062: INFO: Pod "nginx-deployment-85ddf47c5d-9r5x6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9r5x6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-9r5x6,UID:cac08b2b-2324-11ea-a994-fa163e34d433,ResourceVersion:15456880,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c52aa7 0xc000c52aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c52dc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c52fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.063: INFO: Pod "nginx-deployment-85ddf47c5d-9z46l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9z46l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-9z46l,UID:a979af3e-2324-11ea-a994-fa163e34d433,ResourceVersion:15456742,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c533d7 0xc000c533d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c53530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c537c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2b17373aece5a4b549b04d639042a3046536c1105d446ded4264b8aec8595af0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.063: INFO: Pod "nginx-deployment-85ddf47c5d-bqfmb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqfmb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-bqfmb,UID:a97a49ea-2324-11ea-a994-fa163e34d433,ResourceVersion:15456747,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c53977 0xc000c53978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c53a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c53a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:03 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a7c920ff274baedbf79cb5f60be19bd695899f393194bf397591a11770535f8f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.064: INFO: Pod "nginx-deployment-85ddf47c5d-brqw7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-brqw7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-brqw7,UID:a97af15a-2324-11ea-a994-fa163e34d433,ResourceVersion:15456738,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000c53ec7 0xc000c53ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000c53f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c53f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4c56c8c7421813078f8152649b850e300a70817b7e5439c73832da2b4bcd36bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.064: INFO: Pod "nginx-deployment-85ddf47c5d-dkhzh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dkhzh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-dkhzh,UID:ca4c3998-2324-11ea-a994-fa163e34d433,ResourceVersion:15456857,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147a177 0xc00147a178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00147a1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00147a210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.064: INFO: Pod "nginx-deployment-85ddf47c5d-gxddh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gxddh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-gxddh,UID:ca4c17df-2324-11ea-a994-fa163e34d433,ResourceVersion:15456864,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147a467 0xc00147a468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00147a4d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00147a4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:31 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.065: INFO: Pod "nginx-deployment-85ddf47c5d-jqmkz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jqmkz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-jqmkz,UID:a99846dd-2324-11ea-a994-fa163e34d433,ResourceVersion:15456749,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147a567 0xc00147a568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00147a6e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00147a700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3f5adda247113effcfe0e910e75b38d01a5d850e8f54d37facc6ee7c2058e1a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.066: INFO: Pod "nginx-deployment-85ddf47c5d-p48rh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p48rh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-p48rh,UID:ca453e18-2324-11ea-a994-fa163e34d433,ResourceVersion:15456854,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147ad77 0xc00147ad78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00147ade0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00147ae00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:30 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.066: INFO: Pod "nginx-deployment-85ddf47c5d-r6tm9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r6tm9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-r6tm9,UID:a97aa0c4-2324-11ea-a994-fa163e34d433,ResourceVersion:15456758,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147ae77 0xc00147ae78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00147bc60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00147bc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://972b265a5e93a3d5b017d2db8976e279c313b20af35cd2ff6f9ffe6f2723f99d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.067: INFO: Pod "nginx-deployment-85ddf47c5d-rdgfl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rdgfl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-rdgfl,UID:cac0955b-2324-11ea-a994-fa163e34d433,ResourceVersion:15456876,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00147bf67 0xc00147bf68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000bf0cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bf0ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.067: INFO: Pod "nginx-deployment-85ddf47c5d-shmkt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-shmkt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-shmkt,UID:a998c7d7-2324-11ea-a994-fa163e34d433,ResourceVersion:15456755,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc000bf0dc7 0xc000bf0dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00101c2d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00101c2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-20 12:31:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d22255fd5ee2a6263965840d01c9021b6c57a6e85d2ad3228098363da0585045}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.068: INFO: Pod "nginx-deployment-85ddf47c5d-skwjg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-skwjg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-skwjg,UID:a96e1d91-2324-11ea-a994-fa163e34d433,ResourceVersion:15456733,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00101c3b7 0xc00101c3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00101c4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00101c510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-20 12:31:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d9cbb9c9018612f43d56bea74d1c9b9548ff89c8b8ac9bd20ff5fa46baaacab6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.068: INFO: Pod "nginx-deployment-85ddf47c5d-wgrxp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wgrxp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-wgrxp,UID:cac068c3-2324-11ea-a994-fa163e34d433,ResourceVersion:15456879,Generation:0,CreationTimestamp:2019-12-20 12:32:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00101c6b7 0xc00101c6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00101c720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00101c740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.068: INFO: Pod "nginx-deployment-85ddf47c5d-xfmjn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfmjn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-xfmjn,UID:ca2722ee-2324-11ea-a994-fa163e34d433,ResourceVersion:15456892,Generation:0,CreationTimestamp:2019-12-20 12:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00101c7b7 0xc00101c7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00101c960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00101c980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 20 12:32:33.069: INFO: Pod "nginx-deployment-85ddf47c5d-zwqkz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zwqkz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-tgvg5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tgvg5/pods/nginx-deployment-85ddf47c5d-zwqkz,UID:a971125c-2324-11ea-a994-fa163e34d433,ResourceVersion:15456772,Generation:0,CreationTimestamp:2019-12-20 12:31:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a95e2b5b-2324-11ea-a994-fa163e34d433 0xc00101caa7 0xc00101caa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ftr99 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ftr99,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ftr99 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00101cb10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00101cb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:32:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:31:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-20 12:31:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:32:15 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://333b00edadc1ebeeb137bca1461b2b38d28315ba085c93e1aca381b0b0015d23}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:32:33.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-tgvg5" for this suite.
Dec 20 12:33:27.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:33:27.583: INFO: namespace: e2e-tests-deployment-tgvg5, resource: bindings, ignored listing per whitelist
Dec 20 12:33:27.716: INFO: namespace e2e-tests-deployment-tgvg5 deletion completed in 54.227340018s

• [SLOW TEST:112.418 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:33:27.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-d72ns/secret-test-ecdaa45a-2324-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:33:29.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-d72ns" to be "success or failure"
Dec 20 12:33:29.569: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 496.962323ms
Dec 20 12:33:33.339: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266534774s
Dec 20 12:33:36.108: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.035145479s
Dec 20 12:33:38.555: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.482739471s
Dec 20 12:33:40.580: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.507322704s
Dec 20 12:33:42.630: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.557195287s
Dec 20 12:33:44.637: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.564248908s
Dec 20 12:33:47.183: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.110064181s
Dec 20 12:33:49.281: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.208975222s
Dec 20 12:33:51.373: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.300802312s
Dec 20 12:33:53.390: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.317478295s
STEP: Saw pod success
Dec 20 12:33:53.390: INFO: Pod "pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:33:53.397: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004 container env-test: 
STEP: delete the pod
Dec 20 12:33:54.995: INFO: Waiting for pod pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004 to disappear
Dec 20 12:33:55.006: INFO: Pod pod-configmaps-ece63e5f-2324-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:33:55.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d72ns" for this suite.
Dec 20 12:34:01.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:34:01.306: INFO: namespace: e2e-tests-secrets-d72ns, resource: bindings, ignored listing per whitelist
Dec 20 12:34:01.420: INFO: namespace e2e-tests-secrets-d72ns deletion completed in 6.400237927s

• [SLOW TEST:33.702 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:34:01.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-007a6386-2325-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:34:01.837: INFO: Waiting up to 5m0s for pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-xx68f" to be "success or failure"
Dec 20 12:34:02.023: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 185.773065ms
Dec 20 12:34:04.283: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.445606091s
Dec 20 12:34:06.301: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464294612s
Dec 20 12:34:08.877: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.040073411s
Dec 20 12:34:10.897: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060213908s
Dec 20 12:34:12.927: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.090319593s
STEP: Saw pod success
Dec 20 12:34:12.928: INFO: Pod "pod-secrets-0088c604-2325-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:34:12.953: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0088c604-2325-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 12:34:13.117: INFO: Waiting for pod pod-secrets-0088c604-2325-11ea-851f-0242ac110004 to disappear
Dec 20 12:34:13.159: INFO: Pod pod-secrets-0088c604-2325-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:34:13.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xx68f" for this suite.
Dec 20 12:34:19.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:34:19.560: INFO: namespace: e2e-tests-secrets-xx68f, resource: bindings, ignored listing per whitelist
Dec 20 12:34:19.562: INFO: namespace e2e-tests-secrets-xx68f deletion completed in 6.380156027s

• [SLOW TEST:18.142 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:34:19.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-0b432607-2325-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 12:34:19.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-lpgk5" to be "success or failure"
Dec 20 12:34:19.840: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.957756ms
Dec 20 12:34:21.995: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161467031s
Dec 20 12:34:24.065: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231939223s
Dec 20 12:34:26.228: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394735826s
Dec 20 12:34:28.311: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477240172s
Dec 20 12:34:30.332: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.498613856s
STEP: Saw pod success
Dec 20 12:34:30.332: INFO: Pod "pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:34:30.344: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 12:34:31.022: INFO: Waiting for pod pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004 to disappear
Dec 20 12:34:31.030: INFO: Pod pod-configmaps-0b44a108-2325-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:34:31.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lpgk5" for this suite.
Dec 20 12:34:37.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:34:37.164: INFO: namespace: e2e-tests-configmap-lpgk5, resource: bindings, ignored listing per whitelist
Dec 20 12:34:37.372: INFO: namespace e2e-tests-configmap-lpgk5 deletion completed in 6.330728875s

• [SLOW TEST:17.809 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:34:37.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:34:47.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sbd6v" for this suite.
Dec 20 12:35:35.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:35:35.856: INFO: namespace: e2e-tests-kubelet-test-sbd6v, resource: bindings, ignored listing per whitelist
Dec 20 12:35:35.966: INFO: namespace e2e-tests-kubelet-test-sbd6v deletion completed in 48.276578828s

• [SLOW TEST:58.594 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:35:35.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:35:36.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:35:50.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pbvcv" for this suite.
Dec 20 12:36:32.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:36:32.466: INFO: namespace: e2e-tests-pods-pbvcv, resource: bindings, ignored listing per whitelist
Dec 20 12:36:32.755: INFO: namespace e2e-tests-pods-pbvcv deletion completed in 42.400933233s

• [SLOW TEST:56.788 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:36:32.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5ab33ec7-2325-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 12:36:33.093: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-2djqd" to be "success or failure"
Dec 20 12:36:33.224: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 130.66767ms
Dec 20 12:36:35.237: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143350786s
Dec 20 12:36:37.257: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164261567s
Dec 20 12:36:39.575: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481611696s
Dec 20 12:36:41.595: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501669721s
Dec 20 12:36:43.643: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.549750694s
Dec 20 12:36:46.007: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.914195056s
STEP: Saw pod success
Dec 20 12:36:46.008: INFO: Pod "pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:36:46.037: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 12:36:46.519: INFO: Waiting for pod pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004 to disappear
Dec 20 12:36:46.677: INFO: Pod pod-projected-secrets-5ab4713b-2325-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:36:46.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2djqd" for this suite.
Dec 20 12:36:54.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:36:54.865: INFO: namespace: e2e-tests-projected-2djqd, resource: bindings, ignored listing per whitelist
Dec 20 12:36:54.947: INFO: namespace e2e-tests-projected-2djqd deletion completed in 8.253410411s

• [SLOW TEST:22.192 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:36:54.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 20 12:37:04.262: INFO: 10 pods remaining
Dec 20 12:37:04.262: INFO: 10 pods has nil DeletionTimestamp
Dec 20 12:37:04.262: INFO: 
Dec 20 12:37:05.812: INFO: 10 pods remaining
Dec 20 12:37:05.812: INFO: 10 pods has nil DeletionTimestamp
Dec 20 12:37:05.812: INFO: 
Dec 20 12:37:06.650: INFO: 10 pods remaining
Dec 20 12:37:06.650: INFO: 7 pods has nil DeletionTimestamp
Dec 20 12:37:06.650: INFO: 
Dec 20 12:37:07.221: INFO: 0 pods remaining
Dec 20 12:37:07.221: INFO: 0 pods has nil DeletionTimestamp
Dec 20 12:37:07.221: INFO: 
STEP: Gathering metrics
W1220 12:37:07.908900       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 12:37:07.909: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:37:07.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5hv5j" for this suite.
Dec 20 12:37:24.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:37:24.194: INFO: namespace: e2e-tests-gc-5hv5j, resource: bindings, ignored listing per whitelist
Dec 20 12:37:24.243: INFO: namespace e2e-tests-gc-5hv5j deletion completed in 16.311923836s

• [SLOW TEST:29.296 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:37:24.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 20 12:37:24.615: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 20 12:37:24.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:27.184: INFO: stderr: ""
Dec 20 12:37:27.184: INFO: stdout: "service/redis-slave created\n"
Dec 20 12:37:27.185: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 20 12:37:27.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:27.739: INFO: stderr: ""
Dec 20 12:37:27.739: INFO: stdout: "service/redis-master created\n"
Dec 20 12:37:27.741: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 20 12:37:27.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:28.240: INFO: stderr: ""
Dec 20 12:37:28.241: INFO: stdout: "service/frontend created\n"
Dec 20 12:37:28.242: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 20 12:37:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:28.600: INFO: stderr: ""
Dec 20 12:37:28.601: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 20 12:37:28.601: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 20 12:37:28.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:28.990: INFO: stderr: ""
Dec 20 12:37:28.990: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 20 12:37:28.991: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 20 12:37:28.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:37:29.454: INFO: stderr: ""
Dec 20 12:37:29.455: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 20 12:37:29.455: INFO: Waiting for all frontend pods to be Running.
Dec 20 12:37:59.508: INFO: Waiting for frontend to serve content.
Dec 20 12:38:01.879: INFO: Trying to add a new entry to the guestbook.
Dec 20 12:38:01.972: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 20 12:38:02.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:02.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:02.385: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 20 12:38:02.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:03.025: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:03.025: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 20 12:38:03.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:03.200: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:03.200: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 20 12:38:03.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:03.329: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:03.329: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 20 12:38:03.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:03.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:03.790: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 20 12:38:03.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-44wxq'
Dec 20 12:38:04.138: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 12:38:04.138: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:38:04.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-44wxq" for this suite.
Dec 20 12:38:56.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:38:56.466: INFO: namespace: e2e-tests-kubectl-44wxq, resource: bindings, ignored listing per whitelist
Dec 20 12:38:56.586: INFO: namespace e2e-tests-kubectl-44wxq deletion completed in 52.424390717s

• [SLOW TEST:92.342 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
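The guestbook test above drives everything through two kubectl invocations: `create -f -` to apply each manifest from stdin, and `delete --grace-period=0 --force -f -` to clean up. A small sketch that assembles those command lines as argument vectors (string construction only; actually running them requires a cluster, and the binary/kubeconfig paths are taken from the log):

```python
KUBECTL = "/usr/local/bin/kubectl"
KUBECONFIG = "/root/.kube/config"

def create_cmd(namespace):
    # Mirrors: kubectl --kubeconfig=... create -f - --namespace=...
    return [KUBECTL, f"--kubeconfig={KUBECONFIG}", "create", "-f", "-",
            f"--namespace={namespace}"]

def force_delete_cmd(namespace):
    # Mirrors the cleanup step: --grace-period=0 --force skips graceful
    # termination, which is why kubectl prints the "Immediate deletion
    # does not wait for confirmation" warning seen in the log.
    return [KUBECTL, f"--kubeconfig={KUBECONFIG}", "delete",
            "--grace-period=0", "--force", "-f", "-",
            f"--namespace={namespace}"]
```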
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:38:56.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b05a97aa-2325-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 12:38:56.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-dtv47" to be "success or failure"
Dec 20 12:38:56.899: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 105.646561ms
Dec 20 12:38:58.974: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180565876s
Dec 20 12:39:00.996: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202692021s
Dec 20 12:39:03.389: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.595127396s
Dec 20 12:39:05.429: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635446025s
Dec 20 12:39:07.455: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.660760517s
STEP: Saw pod success
Dec 20 12:39:07.455: INFO: Pod "pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:39:07.464: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 12:39:07.665: INFO: Waiting for pod pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004 to disappear
Dec 20 12:39:07.679: INFO: Pod pod-configmaps-b05b7729-2325-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:39:07.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dtv47" for this suite.
Dec 20 12:39:14.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:39:14.905: INFO: namespace: e2e-tests-configmap-dtv47, resource: bindings, ignored listing per whitelist
Dec 20 12:39:14.919: INFO: namespace e2e-tests-configmap-dtv47 deletion completed in 7.22865449s

• [SLOW TEST:18.333 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
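The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above are a poll loop: the framework re-reads the pod's phase every few seconds until it reaches a terminal phase or the timeout expires. A sketch of that loop (the function and parameter names are illustrative, not the framework's):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, poll=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it is terminal, mirroring the
    'success or failure' wait in the log. get_phase is assumed to
    return one of the Kubernetes pod phases ("Pending", "Running",
    "Succeeded", "Failed", "Unknown")."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

Injecting `clock` and `sleep` keeps the loop deterministic under test; the real framework simply sleeps between API reads.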
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:39:14.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-s9vtn in namespace e2e-tests-proxy-g22d7
I1220 12:39:15.321997       8 runners.go:184] Created replication controller with name: proxy-service-s9vtn, namespace: e2e-tests-proxy-g22d7, replica count: 1
I1220 12:39:16.373079       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:17.373752       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:18.374408       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:19.375652       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:20.376534       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:21.377168       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:22.378247       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:23.379784       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:24.380571       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:39:25.381404       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1220 12:39:26.381937       8 runners.go:184] proxy-service-s9vtn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 20 12:39:26.428: INFO: setup took 11.273331915s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 20 12:39:26.520: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-g22d7/pods/http:proxy-service-s9vtn-bk49c:1080/proxy/: 
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 20 12:39:42.024: INFO: Waiting up to 5m0s for pod "client-containers-cb522547-2325-11ea-851f-0242ac110004" in namespace "e2e-tests-containers-fppgr" to be "success or failure"
Dec 20 12:39:42.049: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.701586ms
Dec 20 12:39:44.259: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23400035s
Dec 20 12:39:46.273: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248717838s
Dec 20 12:39:48.325: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300062505s
Dec 20 12:39:50.703: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.678193916s
Dec 20 12:39:52.718: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693782673s
STEP: Saw pod success
Dec 20 12:39:52.718: INFO: Pod "client-containers-cb522547-2325-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:39:52.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-cb522547-2325-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:39:53.582: INFO: Waiting for pod client-containers-cb522547-2325-11ea-851f-0242ac110004 to disappear
Dec 20 12:39:53.591: INFO: Pod client-containers-cb522547-2325-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:39:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fppgr" for this suite.
Dec 20 12:39:59.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:39:59.691: INFO: namespace: e2e-tests-containers-fppgr, resource: bindings, ignored listing per whitelist
Dec 20 12:39:59.898: INFO: namespace e2e-tests-containers-fppgr deletion completed in 6.299725491s

• [SLOW TEST:18.099 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:39:59.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1220 12:40:03.893666       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 20 12:40:03.893: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:40:03.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vv9m5" for this suite.
Dec 20 12:40:09.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:40:10.073: INFO: namespace: e2e-tests-gc-vv9m5, resource: bindings, ignored listing per whitelist
Dec 20 12:40:10.183: INFO: namespace e2e-tests-gc-vv9m5 deletion completed in 6.283949136s

• [SLOW TEST:10.283 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
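The test above checks non-orphaning garbage collection: deleting the Deployment causes the GC to delete the ReplicaSet it owns, and in turn the pods the ReplicaSet owns, by following ownerReferences. A toy model of that transitive cascade (single owner per object for simplicity; the real GC tracks a full reference graph):

```python
def cascade_delete(objects, root):
    """Given a map of object name -> owner name (or None for no owner),
    delete `root` and everything transitively owned by it, the way the
    garbage collector does when dependents are not orphaned.
    Returns the surviving objects."""
    doomed = {root}
    changed = True
    while changed:                      # propagate until a fixed point
        changed = False
        for name, owner in objects.items():
            if owner in doomed and name not in doomed:
                doomed.add(name)        # owned by a deleted object -> deleted
                changed = True
    return {n: o for n, o in objects.items() if n not in doomed}
```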
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:40:10.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 20 12:40:10.586: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 20 12:40:10.667: INFO: Waiting for terminating namespaces to be deleted...
Dec 20 12:40:10.694: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 20 12:40:10.752: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:40:10.752: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 20 12:40:10.752: INFO: 	Container weave ready: true, restart count 0
Dec 20 12:40:10.752: INFO: 	Container weave-npc ready: true, restart count 0
Dec 20 12:40:10.752: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 12:40:10.752: INFO: 	Container coredns ready: true, restart count 0
Dec 20 12:40:10.752: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:40:10.752: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:40:10.753: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 20 12:40:10.753: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 20 12:40:10.753: INFO: 	Container coredns ready: true, restart count 0
Dec 20 12:40:10.753: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 20 12:40:10.753: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e4f1bf52-2325-11ea-851f-0242ac110004 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-e4f1bf52-2325-11ea-851f-0242ac110004 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e4f1bf52-2325-11ea-851f-0242ac110004
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:40:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9m9mk" for this suite.
Dec 20 12:40:51.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:40:51.689: INFO: namespace: e2e-tests-sched-pred-9m9mk, resource: bindings, ignored listing per whitelist
Dec 20 12:40:51.827: INFO: namespace e2e-tests-sched-pred-9m9mk deletion completed in 14.425418s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:41.643 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
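The scheduling predicate validated above is simple at its core: a pod carrying a nodeSelector may only be placed on a node whose labels include every key/value pair in that selector, which is why the test first stamps the node with a random label and then relaunches the pod selecting it. A sketch of the match rule:

```python
def node_selector_matches(node_labels, node_selector):
    """True iff the node's labels contain every key/value pair of the
    pod's nodeSelector (the NodeSelector predicate the test validates)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```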
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:40:51.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-f51df592-2325-11ea-851f-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f51df592-2325-11ea-851f-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:41:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r6w69" for this suite.
Dec 20 12:41:28.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:41:28.623: INFO: namespace: e2e-tests-configmap-r6w69, resource: bindings, ignored listing per whitelist
Dec 20 12:41:28.676: INFO: namespace e2e-tests-configmap-r6w69 deletion completed in 24.257667482s

• [SLOW TEST:36.848 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:41:28.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:41:28.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-wmh2v" to be "success or failure"
Dec 20 12:41:28.990: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.175168ms
Dec 20 12:41:31.014: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035344225s
Dec 20 12:41:33.040: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061950224s
Dec 20 12:41:35.059: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080160384s
Dec 20 12:41:37.201: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222631483s
Dec 20 12:41:39.217: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238270835s
STEP: Saw pod success
Dec 20 12:41:39.217: INFO: Pod "downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:41:39.226: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:41:40.048: INFO: Waiting for pod downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:41:40.072: INFO: Pod downwardapi-volume-0b10a283-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:41:40.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wmh2v" for this suite.
Dec 20 12:41:46.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:41:46.180: INFO: namespace: e2e-tests-downward-api-wmh2v, resource: bindings, ignored listing per whitelist
Dec 20 12:41:46.294: INFO: namespace e2e-tests-downward-api-wmh2v deletion completed in 6.214005179s

• [SLOW TEST:17.618 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
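The DefaultMode test above asserts that every file projected by the downward API volume is created with the volume's `defaultMode` permission bits. A sketch of that behavior on a local directory (a simplified model; the real kubelet also supports per-item mode overrides, omitted here, and the function name is illustrative):

```python
import os
import stat

def write_with_default_mode(dir_path, files, default_mode=0o644):
    """Write each projected file and apply the volume's defaultMode to it,
    then report the resulting permission bits per file."""
    for name, data in files.items():
        path = os.path.join(dir_path, name)
        with open(path, "wb") as f:
            f.write(data)
        os.chmod(path, default_mode)    # defaultMode applies to every item
    return {name: stat.S_IMODE(os.stat(os.path.join(dir_path, name)).st_mode)
            for name in files}
```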
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:41:46.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:41:46.513: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 20 12:41:51.602: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 20 12:41:55.627: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 20 12:41:57.644: INFO: Creating deployment "test-rollover-deployment"
Dec 20 12:41:57.769: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 20 12:41:59.793: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 20 12:41:59.811: INFO: Ensure that both replica sets have 1 created replica
Dec 20 12:42:00.074: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 20 12:42:00.104: INFO: Updating deployment test-rollover-deployment
Dec 20 12:42:00.104: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 20 12:42:02.278: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 20 12:42:02.731: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 20 12:42:02.749: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:02.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442521, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:04.777: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:04.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442521, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:06.794: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:06.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442521, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:08.777: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:08.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442521, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:10.778: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:10.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442521, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:12.851: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:12.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442531, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:14.783: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:14.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442531, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:16.776: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:16.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442531, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:18.777: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:18.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442531, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:20.772: INFO: all replica sets need to contain the pod-template-hash label
Dec 20 12:42:20.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442531, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712442517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:42:23.044: INFO: 
Dec 20 12:42:23.044: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 20 12:42:23.422: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-9bj9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9bj9q/deployments/test-rollover-deployment,UID:1c2aa85e-2326-11ea-a994-fa163e34d433,ResourceVersion:15458456,Generation:2,CreationTimestamp:2019-12-20 12:41:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-20 12:41:57 +0000 UTC 2019-12-20 12:41:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-20 12:42:21 +0000 UTC 2019-12-20 12:41:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 20 12:42:23.430: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-9bj9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9bj9q/replicasets/test-rollover-deployment-5b8479fdb6,UID:1da2dcf3-2326-11ea-a994-fa163e34d433,ResourceVersion:15458447,Generation:2,CreationTimestamp:2019-12-20 12:42:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1c2aa85e-2326-11ea-a994-fa163e34d433 0xc0020fb3b7 0xc0020fb3b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 20 12:42:23.430: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 20 12:42:23.430: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-9bj9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9bj9q/replicasets/test-rollover-controller,UID:157c86a3-2326-11ea-a994-fa163e34d433,ResourceVersion:15458455,Generation:2,CreationTimestamp:2019-12-20 12:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1c2aa85e-2326-11ea-a994-fa163e34d433 0xc0020fb17f 0xc0020fb1a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 12:42:23.431: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-9bj9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9bj9q/replicasets/test-rollover-deployment-58494b7559,UID:1c43a815-2326-11ea-a994-fa163e34d433,ResourceVersion:15458409,Generation:2,CreationTimestamp:2019-12-20 12:41:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1c2aa85e-2326-11ea-a994-fa163e34d433 0xc0020fb2e7 0xc0020fb2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 12:42:23.438: INFO: Pod "test-rollover-deployment-5b8479fdb6-wbjj6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-wbjj6,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-9bj9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9bj9q/pods/test-rollover-deployment-5b8479fdb6-wbjj6,UID:1e279433-2326-11ea-a994-fa163e34d433,ResourceVersion:15458432,Generation:0,CreationTimestamp:2019-12-20 12:42:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 1da2dcf3-2326-11ea-a994-fa163e34d433 0xc001359447 0xc001359448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xrxhr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xrxhr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xrxhr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001359770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001359790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:42:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:42:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:42:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:42:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-20 12:42:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-20 12:42:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d5468ae74e61ecb6bbd5c7ad7ef5f7004e65242cb3a4f9eb5ce28efc3715b266}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:42:23.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9bj9q" for this suite.
Dec 20 12:42:33.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:42:33.709: INFO: namespace: e2e-tests-deployment-9bj9q, resource: bindings, ignored listing per whitelist
Dec 20 12:42:33.946: INFO: namespace e2e-tests-deployment-9bj9q deletion completed in 10.499993251s

• [SLOW TEST:47.651 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:42:33.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:42:34.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-qvb28" to be "success or failure"
Dec 20 12:42:34.466: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 64.54178ms
Dec 20 12:42:36.772: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370173438s
Dec 20 12:42:38.789: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386806975s
Dec 20 12:42:40.835: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432887989s
Dec 20 12:42:42.849: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447167147s
Dec 20 12:42:44.876: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473693459s
STEP: Saw pod success
Dec 20 12:42:44.876: INFO: Pod "downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:42:44.885: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:42:45.598: INFO: Waiting for pod downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:42:45.615: INFO: Pod downwardapi-volume-32019bf8-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:42:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qvb28" for this suite.
Dec 20 12:42:51.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:42:51.748: INFO: namespace: e2e-tests-projected-qvb28, resource: bindings, ignored listing per whitelist
Dec 20 12:42:51.917: INFO: namespace e2e-tests-projected-qvb28 deletion completed in 6.289895327s

• [SLOW TEST:17.970 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:42:51.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 20 12:42:52.249: INFO: Number of nodes with available pods: 0
Dec 20 12:42:52.250: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:53.485: INFO: Number of nodes with available pods: 0
Dec 20 12:42:53.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:54.694: INFO: Number of nodes with available pods: 0
Dec 20 12:42:54.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:55.485: INFO: Number of nodes with available pods: 0
Dec 20 12:42:55.485: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:56.367: INFO: Number of nodes with available pods: 0
Dec 20 12:42:56.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:57.266: INFO: Number of nodes with available pods: 0
Dec 20 12:42:57.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:58.269: INFO: Number of nodes with available pods: 0
Dec 20 12:42:58.269: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:42:59.928: INFO: Number of nodes with available pods: 0
Dec 20 12:42:59.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:43:00.743: INFO: Number of nodes with available pods: 0
Dec 20 12:43:00.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:43:01.266: INFO: Number of nodes with available pods: 0
Dec 20 12:43:01.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:43:02.271: INFO: Number of nodes with available pods: 1
Dec 20 12:43:02.271: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 20 12:43:02.311: INFO: Number of nodes with available pods: 1
Dec 20 12:43:02.311: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vcrf7, will wait for the garbage collector to delete the pods
Dec 20 12:43:03.406: INFO: Deleting DaemonSet.extensions daemon-set took: 12.09869ms
Dec 20 12:43:05.106: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.700629475s
Dec 20 12:43:09.165: INFO: Number of nodes with available pods: 0
Dec 20 12:43:09.165: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 12:43:09.183: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vcrf7/daemonsets","resourceVersion":"15458605"},"items":null}

Dec 20 12:43:09.186: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vcrf7/pods","resourceVersion":"15458605"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:43:09.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vcrf7" for this suite.
Dec 20 12:43:15.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:43:15.587: INFO: namespace: e2e-tests-daemonsets-vcrf7, resource: bindings, ignored listing per whitelist
Dec 20 12:43:15.604: INFO: namespace e2e-tests-daemonsets-vcrf7 deletion completed in 6.407636877s

• [SLOW TEST:23.686 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:43:15.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 20 12:43:15.740: INFO: Waiting up to 5m0s for pod "pod-4ab168ad-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-q5plp" to be "success or failure"
Dec 20 12:43:15.802: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 61.5911ms
Dec 20 12:43:18.265: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524470225s
Dec 20 12:43:20.275: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535338632s
Dec 20 12:43:22.289: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548698232s
Dec 20 12:43:24.309: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.568586202s
STEP: Saw pod success
Dec 20 12:43:24.309: INFO: Pod "pod-4ab168ad-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:43:24.331: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ab168ad-2326-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:43:24.413: INFO: Waiting for pod pod-4ab168ad-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:43:24.544: INFO: Pod pod-4ab168ad-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:43:24.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q5plp" for this suite.
Dec 20 12:43:30.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:43:30.752: INFO: namespace: e2e-tests-emptydir-q5plp, resource: bindings, ignored listing per whitelist
Dec 20 12:43:30.801: INFO: namespace e2e-tests-emptydir-q5plp deletion completed in 6.232785187s

• [SLOW TEST:15.197 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:43:30.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-53c9e92c-2326-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 12:43:31.001: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-r59n8" to be "success or failure"
Dec 20 12:43:31.013: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.039533ms
Dec 20 12:43:33.024: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022446889s
Dec 20 12:43:35.043: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041617499s
Dec 20 12:43:37.077: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07535125s
Dec 20 12:43:39.556: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.55408834s
Dec 20 12:43:41.571: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569559724s
STEP: Saw pod success
Dec 20 12:43:41.571: INFO: Pod "pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:43:42.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 12:43:42.837: INFO: Waiting for pod pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:43:42.868: INFO: Pod pod-projected-configmaps-53cb3701-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:43:42.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r59n8" for this suite.
Dec 20 12:43:49.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:43:49.109: INFO: namespace: e2e-tests-projected-r59n8, resource: bindings, ignored listing per whitelist
Dec 20 12:43:49.160: INFO: namespace e2e-tests-projected-r59n8 deletion completed in 6.277126993s

• [SLOW TEST:18.358 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:43:49.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 20 12:43:49.376: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458720,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 12:43:49.376: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458721,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 20 12:43:49.376: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458722,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 20 12:43:59.519: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458736,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 12:43:59.520: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458737,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 20 12:43:59.520: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-t2nsh,SelfLink:/api/v1/namespaces/e2e-tests-watch-t2nsh/configmaps/e2e-watch-test-label-changed,UID:5ebc9f38-2326-11ea-a994-fa163e34d433,ResourceVersion:15458738,Generation:0,CreationTimestamp:2019-12-20 12:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:43:59.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-t2nsh" for this suite.
Dec 20 12:44:05.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:44:05.711: INFO: namespace: e2e-tests-watch-t2nsh, resource: bindings, ignored listing per whitelist
Dec 20 12:44:05.735: INFO: namespace e2e-tests-watch-t2nsh deletion completed in 6.200010818s

• [SLOW TEST:16.575 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:44:05.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-lgpds/configmap-test-68a7bbaf-2326-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 12:44:06.061: INFO: Waiting up to 5m0s for pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-lgpds" to be "success or failure"
Dec 20 12:44:06.077: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.797861ms
Dec 20 12:44:08.182: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119990302s
Dec 20 12:44:10.196: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133999221s
Dec 20 12:44:12.355: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29353855s
Dec 20 12:44:14.380: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318146295s
Dec 20 12:44:16.391: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32953598s
STEP: Saw pod success
Dec 20 12:44:16.391: INFO: Pod "pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:44:16.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004 container env-test: 
STEP: delete the pod
Dec 20 12:44:17.420: INFO: Waiting for pod pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:44:17.816: INFO: Pod pod-configmaps-68a8b0f4-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:44:17.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lgpds" for this suite.
Dec 20 12:44:24.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:44:24.402: INFO: namespace: e2e-tests-configmap-lgpds, resource: bindings, ignored listing per whitelist
Dec 20 12:44:24.542: INFO: namespace e2e-tests-configmap-lgpds deletion completed in 6.705296872s

• [SLOW TEST:18.806 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:44:24.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 20 12:44:24.772: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458802,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 12:44:24.772: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458802,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 20 12:44:34.804: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458815,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 20 12:44:34.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458815,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 20 12:44:44.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458828,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 12:44:44.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458828,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 20 12:44:54.895: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458840,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 20 12:44:54.895: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-a,UID:73da9148-2326-11ea-a994-fa163e34d433,ResourceVersion:15458840,Generation:0,CreationTimestamp:2019-12-20 12:44:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 20 12:45:04.931: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-b,UID:8bc7f891-2326-11ea-a994-fa163e34d433,ResourceVersion:15458853,Generation:0,CreationTimestamp:2019-12-20 12:45:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 12:45:04.931: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-b,UID:8bc7f891-2326-11ea-a994-fa163e34d433,ResourceVersion:15458853,Generation:0,CreationTimestamp:2019-12-20 12:45:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 20 12:45:14.959: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-b,UID:8bc7f891-2326-11ea-a994-fa163e34d433,ResourceVersion:15458866,Generation:0,CreationTimestamp:2019-12-20 12:45:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 20 12:45:14.959: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-rl75j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rl75j/configmaps/e2e-watch-test-configmap-b,UID:8bc7f891-2326-11ea-a994-fa163e34d433,ResourceVersion:15458866,Generation:0,CreationTimestamp:2019-12-20 12:45:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:45:24.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rl75j" for this suite.
Dec 20 12:45:31.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:45:31.129: INFO: namespace: e2e-tests-watch-rl75j, resource: bindings, ignored listing per whitelist
Dec 20 12:45:31.144: INFO: namespace e2e-tests-watch-rl75j deletion completed in 6.174755303s

• [SLOW TEST:66.601 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:45:31.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:45:31.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-vbvtb" to be "success or failure"
Dec 20 12:45:31.458: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.352441ms
Dec 20 12:45:33.478: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040622378s
Dec 20 12:45:35.496: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057854943s
Dec 20 12:45:37.514: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075906287s
Dec 20 12:45:39.641: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203557515s
Dec 20 12:45:41.656: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218252559s
Dec 20 12:45:43.678: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240019287s
STEP: Saw pod success
Dec 20 12:45:43.678: INFO: Pod "downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:45:43.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:45:44.370: INFO: Waiting for pod downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:45:44.625: INFO: Pod downwardapi-volume-9b9505e9-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:45:44.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vbvtb" for this suite.
Dec 20 12:45:50.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:45:51.045: INFO: namespace: e2e-tests-projected-vbvtb, resource: bindings, ignored listing per whitelist
Dec 20 12:45:51.110: INFO: namespace e2e-tests-projected-vbvtb deletion completed in 6.467551687s

• [SLOW TEST:19.965 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:45:51.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-h8xh
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 12:45:51.346: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-h8xh" in namespace "e2e-tests-subpath-2w54x" to be "success or failure"
Dec 20 12:45:51.368: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 21.70335ms
Dec 20 12:45:53.800: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453930341s
Dec 20 12:45:55.830: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483471239s
Dec 20 12:45:57.873: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526392936s
Dec 20 12:46:00.121: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.774736543s
Dec 20 12:46:02.147: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.800760465s
Dec 20 12:46:04.167: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.820448764s
Dec 20 12:46:06.180: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.833781542s
Dec 20 12:46:08.196: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 16.849793645s
Dec 20 12:46:10.348: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 19.001989968s
Dec 20 12:46:12.429: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 21.082525378s
Dec 20 12:46:14.470: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 23.124056232s
Dec 20 12:46:16.500: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 25.153261102s
Dec 20 12:46:18.522: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 27.175953468s
Dec 20 12:46:20.546: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 29.199676529s
Dec 20 12:46:22.591: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 31.244219336s
Dec 20 12:46:24.811: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Running", Reason="", readiness=false. Elapsed: 33.464580975s
Dec 20 12:46:26.836: INFO: Pod "pod-subpath-test-secret-h8xh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.489152789s
STEP: Saw pod success
Dec 20 12:46:26.836: INFO: Pod "pod-subpath-test-secret-h8xh" satisfied condition "success or failure"
Dec 20 12:46:26.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-h8xh container test-container-subpath-secret-h8xh: 
STEP: delete the pod
Dec 20 12:46:27.144: INFO: Waiting for pod pod-subpath-test-secret-h8xh to disappear
Dec 20 12:46:27.157: INFO: Pod pod-subpath-test-secret-h8xh no longer exists
STEP: Deleting pod pod-subpath-test-secret-h8xh
Dec 20 12:46:27.157: INFO: Deleting pod "pod-subpath-test-secret-h8xh" in namespace "e2e-tests-subpath-2w54x"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:46:27.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2w54x" for this suite.
Dec 20 12:46:35.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:46:35.385: INFO: namespace: e2e-tests-subpath-2w54x, resource: bindings, ignored listing per whitelist
Dec 20 12:46:35.464: INFO: namespace e2e-tests-subpath-2w54x deletion completed in 8.222391744s

• [SLOW TEST:44.353 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:46:35.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-scmt
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 12:46:36.000: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-scmt" in namespace "e2e-tests-subpath-6b5j8" to be "success or failure"
Dec 20 12:46:36.083: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 82.134005ms
Dec 20 12:46:38.097: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096125107s
Dec 20 12:46:40.114: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113018216s
Dec 20 12:46:42.452: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451117484s
Dec 20 12:46:44.764: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763178133s
Dec 20 12:46:46.777: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.776145308s
Dec 20 12:46:48.789: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.788416172s
Dec 20 12:46:50.888: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887678622s
Dec 20 12:46:52.908: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 16.907599874s
Dec 20 12:46:54.919: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 18.918967029s
Dec 20 12:46:56.935: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 20.934541962s
Dec 20 12:46:58.959: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 22.958937986s
Dec 20 12:47:00.978: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 24.977608371s
Dec 20 12:47:02.996: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 26.995016675s
Dec 20 12:47:05.019: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 29.018673637s
Dec 20 12:47:07.044: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 31.043216301s
Dec 20 12:47:09.058: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Running", Reason="", readiness=false. Elapsed: 33.05788591s
Dec 20 12:47:11.448: INFO: Pod "pod-subpath-test-projected-scmt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.447556462s
STEP: Saw pod success
Dec 20 12:47:11.448: INFO: Pod "pod-subpath-test-projected-scmt" satisfied condition "success or failure"
Dec 20 12:47:11.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-scmt container test-container-subpath-projected-scmt: 
STEP: delete the pod
Dec 20 12:47:11.633: INFO: Waiting for pod pod-subpath-test-projected-scmt to disappear
Dec 20 12:47:11.644: INFO: Pod pod-subpath-test-projected-scmt no longer exists
STEP: Deleting pod pod-subpath-test-projected-scmt
Dec 20 12:47:11.644: INFO: Deleting pod "pod-subpath-test-projected-scmt" in namespace "e2e-tests-subpath-6b5j8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:47:11.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6b5j8" for this suite.
Dec 20 12:47:19.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:47:19.943: INFO: namespace: e2e-tests-subpath-6b5j8, resource: bindings, ignored listing per whitelist
Dec 20 12:47:20.006: INFO: namespace e2e-tests-subpath-6b5j8 deletion completed in 8.347086572s

• [SLOW TEST:44.542 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:47:20.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:47:20.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-fsvrn" to be "success or failure"
Dec 20 12:47:20.333: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 103.047159ms
Dec 20 12:47:22.455: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22482602s
Dec 20 12:47:24.488: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257245843s
Dec 20 12:47:27.260: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.029297277s
Dec 20 12:47:29.273: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042655589s
Dec 20 12:47:31.299: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.068661903s
Dec 20 12:47:33.312: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.08172916s
STEP: Saw pod success
Dec 20 12:47:33.312: INFO: Pod "downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:47:33.317: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:47:33.405: INFO: Waiting for pod downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:47:33.686: INFO: Pod downwardapi-volume-dc6bb09f-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:47:33.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fsvrn" for this suite.
Dec 20 12:47:39.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:47:40.051: INFO: namespace: e2e-tests-downward-api-fsvrn, resource: bindings, ignored listing per whitelist
Dec 20 12:47:40.205: INFO: namespace e2e-tests-downward-api-fsvrn deletion completed in 6.494877032s

• [SLOW TEST:20.198 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:47:40.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 12:47:40.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-c5pft" to be "success or failure"
Dec 20 12:47:40.532: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.249157ms
Dec 20 12:47:42.820: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316298397s
Dec 20 12:47:44.839: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334781896s
Dec 20 12:47:46.854: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34985838s
Dec 20 12:47:49.289: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785547648s
Dec 20 12:47:51.880: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.376049614s
STEP: Saw pod success
Dec 20 12:47:51.880: INFO: Pod "downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:47:51.903: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 12:47:52.636: INFO: Waiting for pod downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:47:52.737: INFO: Pod downwardapi-volume-e87ff60b-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:47:52.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c5pft" for this suite.
Dec 20 12:47:58.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:47:58.932: INFO: namespace: e2e-tests-downward-api-c5pft, resource: bindings, ignored listing per whitelist
Dec 20 12:47:58.939: INFO: namespace e2e-tests-downward-api-c5pft deletion completed in 6.186342198s

• [SLOW TEST:18.734 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:47:58.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-f39df882-2326-11ea-851f-0242ac110004
STEP: Creating secret with name secret-projected-all-test-volume-f39df863-2326-11ea-851f-0242ac110004
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 20 12:47:59.224: INFO: Waiting up to 5m0s for pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-6grcz" to be "success or failure"
Dec 20 12:47:59.250: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 25.18089ms
Dec 20 12:48:01.329: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104325615s
Dec 20 12:48:03.349: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124166275s
Dec 20 12:48:05.993: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.768855285s
Dec 20 12:48:08.006: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781988162s
Dec 20 12:48:10.023: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798929481s
STEP: Saw pod success
Dec 20 12:48:10.024: INFO: Pod "projected-volume-f39df500-2326-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:48:10.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-f39df500-2326-11ea-851f-0242ac110004 container projected-all-volume-test: 
STEP: delete the pod
Dec 20 12:48:10.427: INFO: Waiting for pod projected-volume-f39df500-2326-11ea-851f-0242ac110004 to disappear
Dec 20 12:48:10.738: INFO: Pod projected-volume-f39df500-2326-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:48:10.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6grcz" for this suite.
Dec 20 12:48:16.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:48:16.995: INFO: namespace: e2e-tests-projected-6grcz, resource: bindings, ignored listing per whitelist
Dec 20 12:48:17.011: INFO: namespace e2e-tests-projected-6grcz deletion completed in 6.242963814s

• [SLOW TEST:18.071 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:48:17.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:48:17.331: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 109.884097ms)
Dec 20 12:48:17.340: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.454704ms)
Dec 20 12:48:17.348: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.577226ms)
Dec 20 12:48:17.353: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.352714ms)
Dec 20 12:48:17.360: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.952584ms)
Dec 20 12:48:17.369: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.083574ms)
Dec 20 12:48:17.375: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.349646ms)
Dec 20 12:48:17.380: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.277662ms)
Dec 20 12:48:17.386: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.885239ms)
Dec 20 12:48:17.392: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.854134ms)
Dec 20 12:48:17.398: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.928011ms)
Dec 20 12:48:17.406: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.245923ms)
Dec 20 12:48:17.412: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.37484ms)
Dec 20 12:48:17.417: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.903193ms)
Dec 20 12:48:17.422: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.733142ms)
Dec 20 12:48:17.429: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.386828ms)
Dec 20 12:48:17.440: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.718513ms)
Dec 20 12:48:17.445: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.264755ms)
Dec 20 12:48:17.450: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.936063ms)
Dec 20 12:48:17.455: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.258981ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:48:17.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-b7scj" for this suite.
Dec 20 12:48:25.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:48:25.710: INFO: namespace: e2e-tests-proxy-b7scj, resource: bindings, ignored listing per whitelist
Dec 20 12:48:25.774: INFO: namespace e2e-tests-proxy-b7scj deletion completed in 8.313629221s

• [SLOW TEST:8.763 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:48:25.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:48:34.513: INFO: Waiting up to 5m0s for pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004" in namespace "e2e-tests-pods-hcbn5" to be "success or failure"
Dec 20 12:48:34.754: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 240.630549ms
Dec 20 12:48:36.848: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334569359s
Dec 20 12:48:38.878: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364441321s
Dec 20 12:48:41.125: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611884377s
Dec 20 12:48:43.143: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.629314698s
Dec 20 12:48:45.164: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.650502667s
STEP: Saw pod success
Dec 20 12:48:45.164: INFO: Pod "client-envvars-08b28a55-2327-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:48:45.185: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-08b28a55-2327-11ea-851f-0242ac110004 container env3cont: 
STEP: delete the pod
Dec 20 12:48:45.301: INFO: Waiting for pod client-envvars-08b28a55-2327-11ea-851f-0242ac110004 to disappear
Dec 20 12:48:45.324: INFO: Pod client-envvars-08b28a55-2327-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:48:45.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hcbn5" for this suite.
Dec 20 12:49:39.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:49:39.558: INFO: namespace: e2e-tests-pods-hcbn5, resource: bindings, ignored listing per whitelist
Dec 20 12:49:39.619: INFO: namespace e2e-tests-pods-hcbn5 deletion completed in 54.265700177s

• [SLOW TEST:73.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
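The "environment variables for services" test above depends on the kubelet injecting Docker-links-style variables for each service visible to the pod. A rough sketch of the naming convention (the service name and address below are made up for illustration):

```python
def service_env_vars(service_name: str, host: str, port: int) -> dict:
    """Approximate the kubelet's service environment variables: the
    service name is upper-cased and '-' is replaced with '_' to form
    the *_SERVICE_HOST / *_SERVICE_PORT variable prefix."""
    prefix = service_name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": host,
        f"{prefix}_SERVICE_PORT": str(port),
    }

env = service_env_vars("fooservice-1", "10.0.0.1", 8765)
```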
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:49:39.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 20 12:49:40.066: INFO: Waiting up to 5m0s for pod "pod-2fad8b76-2327-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-hwvd6" to be "success or failure"
Dec 20 12:49:40.098: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.302555ms
Dec 20 12:49:42.200: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133830019s
Dec 20 12:49:44.220: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153182695s
Dec 20 12:49:47.214: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14786057s
Dec 20 12:49:49.269: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.202168939s
Dec 20 12:49:51.284: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.217819558s
STEP: Saw pod success
Dec 20 12:49:51.284: INFO: Pod "pod-2fad8b76-2327-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:49:51.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2fad8b76-2327-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:49:51.458: INFO: Waiting for pod pod-2fad8b76-2327-11ea-851f-0242ac110004 to disappear
Dec 20 12:49:51.471: INFO: Pod pod-2fad8b76-2327-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:49:51.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hwvd6" for this suite.
Dec 20 12:49:57.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:49:57.785: INFO: namespace: e2e-tests-emptydir-hwvd6, resource: bindings, ignored listing per whitelist
Dec 20 12:49:57.830: INFO: namespace e2e-tests-emptydir-hwvd6 deletion completed in 6.233994546s

• [SLOW TEST:18.211 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
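The (non-root,0666,default) case above mounts an emptyDir, writes a file with mode 0666, and has the test container read the permission bits back. A standalone sketch of just the mode check, with a local temporary directory standing in for the volume mount:

```python
import os
import stat
import tempfile

# A scratch dir stands in for the emptyDir mount; write a file, force
# mode 0666, and read the permission bits back the way the mount-test
# container would report them.
with tempfile.TemporaryDirectory() as vol:
    path = os.path.join(vol, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester content")
    os.chmod(path, 0o666)          # chmod bypasses the umask
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o666, oct(mode)
```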
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:49:57.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:49:58.164: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 20 12:49:58.190: INFO: Number of nodes with available pods: 0
Dec 20 12:49:58.190: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 20 12:49:58.239: INFO: Number of nodes with available pods: 0
Dec 20 12:49:58.239: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:49:59.727: INFO: Number of nodes with available pods: 0
Dec 20 12:49:59.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:00.259: INFO: Number of nodes with available pods: 0
Dec 20 12:50:00.259: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:01.251: INFO: Number of nodes with available pods: 0
Dec 20 12:50:01.251: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:02.270: INFO: Number of nodes with available pods: 0
Dec 20 12:50:02.270: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:03.807: INFO: Number of nodes with available pods: 0
Dec 20 12:50:03.807: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:04.560: INFO: Number of nodes with available pods: 0
Dec 20 12:50:04.560: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:05.724: INFO: Number of nodes with available pods: 0
Dec 20 12:50:05.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:06.265: INFO: Number of nodes with available pods: 0
Dec 20 12:50:06.265: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:07.256: INFO: Number of nodes with available pods: 0
Dec 20 12:50:07.256: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:08.359: INFO: Number of nodes with available pods: 1
Dec 20 12:50:08.359: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 20 12:50:08.596: INFO: Number of nodes with available pods: 1
Dec 20 12:50:08.596: INFO: Number of running nodes: 0, number of available pods: 1
Dec 20 12:50:09.639: INFO: Number of nodes with available pods: 0
Dec 20 12:50:09.639: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 20 12:50:09.754: INFO: Number of nodes with available pods: 0
Dec 20 12:50:09.754: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:10.767: INFO: Number of nodes with available pods: 0
Dec 20 12:50:10.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:11.800: INFO: Number of nodes with available pods: 0
Dec 20 12:50:11.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:12.845: INFO: Number of nodes with available pods: 0
Dec 20 12:50:12.845: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:13.803: INFO: Number of nodes with available pods: 0
Dec 20 12:50:13.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:14.778: INFO: Number of nodes with available pods: 0
Dec 20 12:50:14.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:15.775: INFO: Number of nodes with available pods: 0
Dec 20 12:50:15.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:16.781: INFO: Number of nodes with available pods: 0
Dec 20 12:50:16.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:17.772: INFO: Number of nodes with available pods: 0
Dec 20 12:50:17.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:18.778: INFO: Number of nodes with available pods: 0
Dec 20 12:50:18.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:19.772: INFO: Number of nodes with available pods: 0
Dec 20 12:50:19.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:20.772: INFO: Number of nodes with available pods: 0
Dec 20 12:50:20.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:21.771: INFO: Number of nodes with available pods: 0
Dec 20 12:50:21.771: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:22.783: INFO: Number of nodes with available pods: 0
Dec 20 12:50:22.783: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:23.782: INFO: Number of nodes with available pods: 0
Dec 20 12:50:23.782: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:24.822: INFO: Number of nodes with available pods: 0
Dec 20 12:50:24.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:25.836: INFO: Number of nodes with available pods: 0
Dec 20 12:50:25.836: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:26.767: INFO: Number of nodes with available pods: 0
Dec 20 12:50:26.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:27.873: INFO: Number of nodes with available pods: 0
Dec 20 12:50:27.874: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:28.967: INFO: Number of nodes with available pods: 0
Dec 20 12:50:28.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:29.899: INFO: Number of nodes with available pods: 0
Dec 20 12:50:29.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:30.765: INFO: Number of nodes with available pods: 0
Dec 20 12:50:30.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:31.844: INFO: Number of nodes with available pods: 0
Dec 20 12:50:31.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 12:50:32.766: INFO: Number of nodes with available pods: 1
Dec 20 12:50:32.767: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wsm5v, will wait for the garbage collector to delete the pods
Dec 20 12:50:32.886: INFO: Deleting DaemonSet.extensions daemon-set took: 56.261539ms
Dec 20 12:50:33.087: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.771371ms
Dec 20 12:50:42.811: INFO: Number of nodes with available pods: 0
Dec 20 12:50:42.811: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 12:50:42.818: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wsm5v/daemonsets","resourceVersion":"15459563"},"items":null}

Dec 20 12:50:42.825: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wsm5v/pods","resourceVersion":"15459563"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:50:42.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wsm5v" for this suite.
Dec 20 12:50:48.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:50:49.105: INFO: namespace: e2e-tests-daemonsets-wsm5v, resource: bindings, ignored listing per whitelist
Dec 20 12:50:49.155: INFO: namespace e2e-tests-daemonsets-wsm5v deletion completed in 6.261447581s

• [SLOW TEST:51.325 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
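The complex-daemon test above flips a node label from blue to green and watches whether the DaemonSet schedules a pod there. The eligibility decision reduces to nodeSelector matching; a small sketch of that predicate (label keys below are illustrative, not the test's actual labels):

```python
def selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A DaemonSet pod is eligible for a node only if every key/value
    pair in its nodeSelector is present among the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# While the node is labeled blue and the DaemonSet selects green,
# no daemon pod runs; after the label flips, the pod is launched.
assert not selector_matches({"color": "blue"}, {"color": "green"})
assert selector_matches({"color": "green"}, {"color": "green"})
```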
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:50:49.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:50:49.334: INFO: Creating ReplicaSet my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004
Dec 20 12:50:49.414: INFO: Pod name my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004: Found 0 pods out of 1
Dec 20 12:50:54.430: INFO: Pod name my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004: Found 1 pods out of 1
Dec 20 12:50:54.431: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004" is running
Dec 20 12:50:58.518: INFO: Pod "my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004-4bwbq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 12:50:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 12:50:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 12:50:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-20 12:50:49 +0000 UTC Reason: Message:}])
Dec 20 12:50:58.518: INFO: Trying to dial the pod
Dec 20 12:51:03.548: INFO: Controller my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004: Got expected result from replica 1 [my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004-4bwbq]: "my-hostname-basic-59131a37-2327-11ea-851f-0242ac110004-4bwbq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:51:03.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ck8bv" for this suite.
Dec 20 12:51:09.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:51:09.741: INFO: namespace: e2e-tests-replicaset-ck8bv, resource: bindings, ignored listing per whitelist
Dec 20 12:51:09.772: INFO: namespace e2e-tests-replicaset-ck8bv deletion completed in 6.216470884s

• [SLOW TEST:20.616 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
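The ReplicaSet test above counts pods by the ReplicaSet's `name` label ("Found 1 pods out of 1") before dialing each replica. A sketch of that filtering step, with made-up pod records:

```python
def pods_matching(pods, name_label):
    """Select pods carrying the ReplicaSet's 'name' label, mirroring
    the 'Pod name ...: Found N pods out of M' check in the log."""
    return [p for p in pods if p.get("labels", {}).get("name") == name_label]

pods = [
    {"name": "my-hostname-basic-xyz-4bwbq", "labels": {"name": "my-hostname-basic-xyz"}},
    {"name": "unrelated-pod", "labels": {"name": "unrelated"}},
]
found = pods_matching(pods, "my-hostname-basic-xyz")
```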
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:51:09.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-27dxs
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 12:51:10.146: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 12:51:48.548: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-27dxs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 12:51:48.548: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 12:51:49.028: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:51:49.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-27dxs" for this suite.
Dec 20 12:52:17.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:52:17.418: INFO: namespace: e2e-tests-pod-network-test-27dxs, resource: bindings, ignored listing per whitelist
Dec 20 12:52:17.430: INFO: namespace e2e-tests-pod-network-test-27dxs deletion completed in 28.381937748s

• [SLOW TEST:67.658 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
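The intra-pod HTTP check above execs `curl` against a webserver pod's `/dial` endpoint, with the target host, port, protocol, and try count carried in the query string. A sketch of assembling that URL (the helper name is hypothetical; the IPs echo the logged request):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, port=8080, protocol="http", tries=1):
    """Rebuild the /dial URL the networking test curls: the proxy pod
    dials the target and reports which hostnames answered."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

url = dial_url("10.32.0.5", "10.32.0.4")
```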
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:52:17.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 20 12:52:27.922: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:52:54.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zjktb" for this suite.
Dec 20 12:53:00.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:53:01.832: INFO: namespace: e2e-tests-namespaces-zjktb, resource: bindings, ignored listing per whitelist
Dec 20 12:53:01.916: INFO: namespace e2e-tests-namespaces-zjktb deletion completed in 7.196483351s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rwglk" for this suite.
Dec 20 12:53:01.919: INFO: Namespace e2e-tests-nsdeletetest-rwglk was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-kj9t6" for this suite.
Dec 20 12:53:07.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:53:08.020: INFO: namespace: e2e-tests-nsdeletetest-kj9t6, resource: bindings, ignored listing per whitelist
Dec 20 12:53:08.121: INFO: namespace e2e-tests-nsdeletetest-kj9t6 deletion completed in 6.201968085s

• [SLOW TEST:50.691 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:53:08.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 20 12:53:08.396: INFO: Waiting up to 5m0s for pod "pod-abf05a94-2327-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-pqdv2" to be "success or failure"
Dec 20 12:53:08.428: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.870357ms
Dec 20 12:53:10.466: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070254681s
Dec 20 12:53:12.494: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097906898s
Dec 20 12:53:14.664: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267638314s
Dec 20 12:53:16.677: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28100447s
Dec 20 12:53:18.723: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.32700138s
STEP: Saw pod success
Dec 20 12:53:18.724: INFO: Pod "pod-abf05a94-2327-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:53:18.735: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-abf05a94-2327-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:53:18.804: INFO: Waiting for pod pod-abf05a94-2327-11ea-851f-0242ac110004 to disappear
Dec 20 12:53:18.963: INFO: Pod pod-abf05a94-2327-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:53:18.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pqdv2" for this suite.
Dec 20 12:53:25.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:53:25.050: INFO: namespace: e2e-tests-emptydir-pqdv2, resource: bindings, ignored listing per whitelist
Dec 20 12:53:25.175: INFO: namespace e2e-tests-emptydir-pqdv2 deletion completed in 6.200053415s

• [SLOW TEST:17.053 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
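Several of the tests above poll a pod roughly every two seconds, logging the elapsed time, until its phase reaches Succeeded or Failed ("success or failure"). A condensed sketch of that wait loop under stated assumptions (the phase sequence is faked so no cluster is needed, and the poll interval is shortened):

```python
import itertools
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, poll=0.01):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    mirroring the framework's 'success or failure' condition, or raise
    after the timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll)
    raise TimeoutError("pod never reached a terminal phase")

# Fake a pod that stays Pending for a few polls, then succeeds.
phases = itertools.chain(["Pending"] * 3, itertools.repeat("Succeeded"))
result = wait_for_terminal_phase(lambda: next(phases))
```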
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:53:25.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-zbhh
STEP: Creating a pod to test atomic-volume-subpath
Dec 20 12:53:25.465: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zbhh" in namespace "e2e-tests-subpath-99tvb" to be "success or failure"
Dec 20 12:53:25.505: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 39.882497ms
Dec 20 12:53:27.817: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35247785s
Dec 20 12:53:29.840: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375088984s
Dec 20 12:53:32.088: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62325145s
Dec 20 12:53:34.111: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646455057s
Dec 20 12:53:36.120: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655365486s
Dec 20 12:53:38.138: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.672843383s
Dec 20 12:53:40.233: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.767963231s
Dec 20 12:53:42.243: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.778050991s
Dec 20 12:53:44.259: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 18.794063279s
Dec 20 12:53:46.276: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 20.810812958s
Dec 20 12:53:48.291: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 22.825856212s
Dec 20 12:53:50.305: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 24.840504887s
Dec 20 12:53:52.318: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 26.852944312s
Dec 20 12:53:54.349: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 28.883726019s
Dec 20 12:53:56.374: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 30.908859216s
Dec 20 12:53:58.390: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Running", Reason="", readiness=false. Elapsed: 32.925346666s
Dec 20 12:54:00.409: INFO: Pod "pod-subpath-test-configmap-zbhh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.943840677s
STEP: Saw pod success
Dec 20 12:54:00.409: INFO: Pod "pod-subpath-test-configmap-zbhh" satisfied condition "success or failure"
Dec 20 12:54:00.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-zbhh container test-container-subpath-configmap-zbhh: 
STEP: delete the pod
Dec 20 12:54:00.630: INFO: Waiting for pod pod-subpath-test-configmap-zbhh to disappear
Dec 20 12:54:00.647: INFO: Pod pod-subpath-test-configmap-zbhh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zbhh
Dec 20 12:54:00.647: INFO: Deleting pod "pod-subpath-test-configmap-zbhh" in namespace "e2e-tests-subpath-99tvb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:54:00.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-99tvb" for this suite.
Dec 20 12:54:10.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:54:11.083: INFO: namespace: e2e-tests-subpath-99tvb, resource: bindings, ignored listing per whitelist
Dec 20 12:54:11.113: INFO: namespace e2e-tests-subpath-99tvb deletion completed in 10.432894137s

• [SLOW TEST:45.937 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:54:11.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:54:11.429: INFO: Creating deployment "test-recreate-deployment"
Dec 20 12:54:11.528: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 20 12:54:11.552: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 20 12:54:13.586: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 20 12:54:13.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:54:15.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:54:17.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:54:19.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:54:21.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443251, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 20 12:54:23.629: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 20 12:54:23.653: INFO: Updating deployment test-recreate-deployment
Dec 20 12:54:23.653: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 20 12:54:24.485: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-q9wxf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q9wxf/deployments/test-recreate-deployment,UID:d18a4e90-2327-11ea-a994-fa163e34d433,ResourceVersion:15460076,Generation:2,CreationTimestamp:2019-12-20 12:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-20 12:54:24 +0000 UTC 2019-12-20 12:54:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-20 12:54:24 +0000 UTC 2019-12-20 12:54:11 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 20 12:54:24.497: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-q9wxf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q9wxf/replicasets/test-recreate-deployment-589c4bfd,UID:d90fb6c0-2327-11ea-a994-fa163e34d433,ResourceVersion:15460073,Generation:1,CreationTimestamp:2019-12-20 12:54:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d18a4e90-2327-11ea-a994-fa163e34d433 0xc0017f176f 0xc0017f1780}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 12:54:24.497: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 20 12:54:24.497: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-q9wxf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q9wxf/replicasets/test-recreate-deployment-5bf7f65dc,UID:d19c94f3-2327-11ea-a994-fa163e34d433,ResourceVersion:15460065,Generation:2,CreationTimestamp:2019-12-20 12:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d18a4e90-2327-11ea-a994-fa163e34d433 0xc0017f18f0 0xc0017f18f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 20 12:54:24.508: INFO: Pod "test-recreate-deployment-589c4bfd-l7tfn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-l7tfn,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-q9wxf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q9wxf/pods/test-recreate-deployment-589c4bfd-l7tfn,UID:d927bb01-2327-11ea-a994-fa163e34d433,ResourceVersion:15460078,Generation:0,CreationTimestamp:2019-12-20 12:54:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd d90fb6c0-2327-11ea-a994-fa163e34d433 0xc0012deb7f 0xc0012deb90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p6dvs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p6dvs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-p6dvs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012dec70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0012dec90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:54:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:54:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:54:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:54:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-20 12:54:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:54:24.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-q9wxf" for this suite.
Dec 20 12:54:38.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:54:39.187: INFO: namespace: e2e-tests-deployment-q9wxf, resource: bindings, ignored listing per whitelist
Dec 20 12:54:39.569: INFO: namespace e2e-tests-deployment-q9wxf deletion completed in 14.951084676s

• [SLOW TEST:28.455 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:54:39.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-28h6c
I1220 12:54:40.146670       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-28h6c, replica count: 1
I1220 12:54:41.197323       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:42.197792       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:43.198262       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:44.198793       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:45.199208       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:46.199530       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:47.199860       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:48.200196       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:49.200565       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:50.201171       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:51.201904       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:52.202307       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:53.203025       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:54.204032       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:55.204808       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:56.205418       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1220 12:54:57.205801       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 20 12:54:58.573: INFO: Created: latency-svc-mb76m
Dec 20 12:54:59.154: INFO: Got endpoints: latency-svc-mb76m [1.848715188s]
Dec 20 12:54:59.431: INFO: Created: latency-svc-pw6gw
Dec 20 12:54:59.444: INFO: Got endpoints: latency-svc-pw6gw [289.33701ms]
Dec 20 12:54:59.514: INFO: Created: latency-svc-ltxgm
Dec 20 12:54:59.813: INFO: Got endpoints: latency-svc-ltxgm [657.311355ms]
Dec 20 12:54:59.857: INFO: Created: latency-svc-w9zhq
Dec 20 12:55:00.118: INFO: Got endpoints: latency-svc-w9zhq [963.316483ms]
Dec 20 12:55:00.136: INFO: Created: latency-svc-d8r5k
Dec 20 12:55:00.154: INFO: Got endpoints: latency-svc-d8r5k [998.473471ms]
Dec 20 12:55:00.629: INFO: Created: latency-svc-ds9b2
Dec 20 12:55:00.659: INFO: Got endpoints: latency-svc-ds9b2 [1.504024106s]
Dec 20 12:55:00.684: INFO: Created: latency-svc-tzf9k
Dec 20 12:55:00.897: INFO: Got endpoints: latency-svc-tzf9k [1.741467773s]
Dec 20 12:55:00.967: INFO: Created: latency-svc-tf4st
Dec 20 12:55:01.216: INFO: Got endpoints: latency-svc-tf4st [2.060789039s]
Dec 20 12:55:01.253: INFO: Created: latency-svc-fnp8n
Dec 20 12:55:01.266: INFO: Got endpoints: latency-svc-fnp8n [2.109577932s]
Dec 20 12:55:01.572: INFO: Created: latency-svc-fwp2v
Dec 20 12:55:01.605: INFO: Got endpoints: latency-svc-fwp2v [2.449829284s]
Dec 20 12:55:01.879: INFO: Created: latency-svc-cxcrx
Dec 20 12:55:02.325: INFO: Got endpoints: latency-svc-cxcrx [3.169444558s]
Dec 20 12:55:02.330: INFO: Created: latency-svc-gs5ql
Dec 20 12:55:02.907: INFO: Got endpoints: latency-svc-gs5ql [3.750880956s]
Dec 20 12:55:02.959: INFO: Created: latency-svc-78r5s
Dec 20 12:55:03.477: INFO: Got endpoints: latency-svc-78r5s [4.320562294s]
Dec 20 12:55:03.566: INFO: Created: latency-svc-zlntc
Dec 20 12:55:03.889: INFO: Got endpoints: latency-svc-zlntc [4.732622281s]
Dec 20 12:55:03.915: INFO: Created: latency-svc-kc4lz
Dec 20 12:55:03.927: INFO: Got endpoints: latency-svc-kc4lz [4.771083296s]
Dec 20 12:55:04.469: INFO: Created: latency-svc-92lbz
Dec 20 12:55:04.737: INFO: Got endpoints: latency-svc-92lbz [5.580624169s]
Dec 20 12:55:04.770: INFO: Created: latency-svc-2nwbg
Dec 20 12:55:04.775: INFO: Got endpoints: latency-svc-2nwbg [5.330838908s]
Dec 20 12:55:05.112: INFO: Created: latency-svc-hn2fp
Dec 20 12:55:05.143: INFO: Got endpoints: latency-svc-hn2fp [5.329243932s]
Dec 20 12:55:05.417: INFO: Created: latency-svc-mlflh
Dec 20 12:55:05.417: INFO: Got endpoints: latency-svc-mlflh [5.298812654s]
Dec 20 12:55:05.761: INFO: Created: latency-svc-lfzkx
Dec 20 12:55:05.787: INFO: Got endpoints: latency-svc-lfzkx [5.632161191s]
Dec 20 12:55:06.060: INFO: Created: latency-svc-bz985
Dec 20 12:55:06.906: INFO: Got endpoints: latency-svc-bz985 [6.247334448s]
Dec 20 12:55:06.935: INFO: Created: latency-svc-rxxc7
Dec 20 12:55:06.939: INFO: Got endpoints: latency-svc-rxxc7 [6.041229078s]
Dec 20 12:55:07.314: INFO: Created: latency-svc-k2cq9
Dec 20 12:55:07.347: INFO: Got endpoints: latency-svc-k2cq9 [6.130367502s]
Dec 20 12:55:07.566: INFO: Created: latency-svc-9vq6d
Dec 20 12:55:07.575: INFO: Got endpoints: latency-svc-9vq6d [6.309303038s]
Dec 20 12:55:07.645: INFO: Created: latency-svc-5tbvl
Dec 20 12:55:07.818: INFO: Got endpoints: latency-svc-5tbvl [6.211895161s]
Dec 20 12:55:07.835: INFO: Created: latency-svc-z8tvl
Dec 20 12:55:07.869: INFO: Got endpoints: latency-svc-z8tvl [5.542812857s]
Dec 20 12:55:08.076: INFO: Created: latency-svc-554vg
Dec 20 12:55:08.107: INFO: Got endpoints: latency-svc-554vg [5.199926094s]
Dec 20 12:55:08.293: INFO: Created: latency-svc-bjbw6
Dec 20 12:55:08.344: INFO: Got endpoints: latency-svc-bjbw6 [4.866742748s]
Dec 20 12:55:08.607: INFO: Created: latency-svc-v9m8x
Dec 20 12:55:08.608: INFO: Got endpoints: latency-svc-v9m8x [4.71898238s]
Dec 20 12:55:08.887: INFO: Created: latency-svc-htqxm
Dec 20 12:55:08.913: INFO: Got endpoints: latency-svc-htqxm [4.985683629s]
Dec 20 12:55:09.147: INFO: Created: latency-svc-46lc7
Dec 20 12:55:09.230: INFO: Created: latency-svc-hxfj9
Dec 20 12:55:09.431: INFO: Got endpoints: latency-svc-hxfj9 [4.655700984s]
Dec 20 12:55:09.431: INFO: Got endpoints: latency-svc-46lc7 [4.694103106s]
Dec 20 12:55:09.472: INFO: Created: latency-svc-lmgcx
Dec 20 12:55:09.480: INFO: Got endpoints: latency-svc-lmgcx [4.336953918s]
Dec 20 12:55:09.661: INFO: Created: latency-svc-kt4cl
Dec 20 12:55:09.688: INFO: Got endpoints: latency-svc-kt4cl [4.270380374s]
Dec 20 12:55:09.740: INFO: Created: latency-svc-kj8k9
Dec 20 12:55:09.960: INFO: Created: latency-svc-svnm7
Dec 20 12:55:09.963: INFO: Got endpoints: latency-svc-kj8k9 [4.17612806s]
Dec 20 12:55:09.982: INFO: Got endpoints: latency-svc-svnm7 [3.07525381s]
Dec 20 12:55:10.164: INFO: Created: latency-svc-kcwv2
Dec 20 12:55:10.195: INFO: Got endpoints: latency-svc-kcwv2 [3.256204052s]
Dec 20 12:55:10.429: INFO: Created: latency-svc-9blpp
Dec 20 12:55:10.447: INFO: Got endpoints: latency-svc-9blpp [3.100166987s]
Dec 20 12:55:10.665: INFO: Created: latency-svc-srrn4
Dec 20 12:55:10.703: INFO: Got endpoints: latency-svc-srrn4 [3.127607745s]
Dec 20 12:55:10.867: INFO: Created: latency-svc-vjltc
Dec 20 12:55:10.925: INFO: Got endpoints: latency-svc-vjltc [3.107442577s]
Dec 20 12:55:11.085: INFO: Created: latency-svc-xcdgc
Dec 20 12:55:11.091: INFO: Got endpoints: latency-svc-xcdgc [3.222067722s]
Dec 20 12:55:11.152: INFO: Created: latency-svc-8zpqt
Dec 20 12:55:11.284: INFO: Got endpoints: latency-svc-8zpqt [3.177373265s]
Dec 20 12:55:11.311: INFO: Created: latency-svc-f2gtt
Dec 20 12:55:11.355: INFO: Got endpoints: latency-svc-f2gtt [3.011227555s]
Dec 20 12:55:11.569: INFO: Created: latency-svc-xmrdq
Dec 20 12:55:11.588: INFO: Got endpoints: latency-svc-xmrdq [303.203029ms]
Dec 20 12:55:11.890: INFO: Created: latency-svc-65fcx
Dec 20 12:55:11.912: INFO: Got endpoints: latency-svc-65fcx [3.304360629s]
Dec 20 12:55:12.281: INFO: Created: latency-svc-czmkf
Dec 20 12:55:12.446: INFO: Got endpoints: latency-svc-czmkf [3.532928757s]
Dec 20 12:55:12.517: INFO: Created: latency-svc-kvmtd
Dec 20 12:55:12.739: INFO: Got endpoints: latency-svc-kvmtd [3.307277065s]
Dec 20 12:55:12.771: INFO: Created: latency-svc-r62df
Dec 20 12:55:13.092: INFO: Got endpoints: latency-svc-r62df [3.660798471s]
Dec 20 12:55:13.111: INFO: Created: latency-svc-gxt6v
Dec 20 12:55:13.148: INFO: Got endpoints: latency-svc-gxt6v [3.667331829s]
Dec 20 12:55:13.404: INFO: Created: latency-svc-zqmcf
Dec 20 12:55:13.406: INFO: Got endpoints: latency-svc-zqmcf [3.718351782s]
Dec 20 12:55:13.471: INFO: Created: latency-svc-xx62z
Dec 20 12:55:13.647: INFO: Got endpoints: latency-svc-xx62z [3.684040242s]
Dec 20 12:55:13.677: INFO: Created: latency-svc-7zjt4
Dec 20 12:55:13.697: INFO: Got endpoints: latency-svc-7zjt4 [3.714688147s]
Dec 20 12:55:13.924: INFO: Created: latency-svc-4x7kq
Dec 20 12:55:13.944: INFO: Got endpoints: latency-svc-4x7kq [3.748333271s]
Dec 20 12:55:14.025: INFO: Created: latency-svc-xx4fq
Dec 20 12:55:14.187: INFO: Got endpoints: latency-svc-xx4fq [3.739937561s]
Dec 20 12:55:14.236: INFO: Created: latency-svc-vczhp
Dec 20 12:55:14.466: INFO: Got endpoints: latency-svc-vczhp [3.76249876s]
Dec 20 12:55:14.575: INFO: Created: latency-svc-8g8l9
Dec 20 12:55:14.691: INFO: Got endpoints: latency-svc-8g8l9 [3.765412634s]
Dec 20 12:55:14.758: INFO: Created: latency-svc-76mnp
Dec 20 12:55:14.783: INFO: Got endpoints: latency-svc-76mnp [3.692194804s]
Dec 20 12:55:14.952: INFO: Created: latency-svc-c75d6
Dec 20 12:55:14.966: INFO: Got endpoints: latency-svc-c75d6 [3.611213384s]
Dec 20 12:55:15.160: INFO: Created: latency-svc-nnwzg
Dec 20 12:55:15.188: INFO: Got endpoints: latency-svc-nnwzg [3.599657694s]
Dec 20 12:55:15.356: INFO: Created: latency-svc-wlj5t
Dec 20 12:55:15.395: INFO: Got endpoints: latency-svc-wlj5t [3.48294646s]
Dec 20 12:55:15.510: INFO: Created: latency-svc-smj6j
Dec 20 12:55:15.542: INFO: Got endpoints: latency-svc-smj6j [3.096001371s]
Dec 20 12:55:15.618: INFO: Created: latency-svc-7s7p6
Dec 20 12:55:15.792: INFO: Got endpoints: latency-svc-7s7p6 [3.053360445s]
Dec 20 12:55:15.809: INFO: Created: latency-svc-zfm4p
Dec 20 12:55:15.876: INFO: Got endpoints: latency-svc-zfm4p [2.783812228s]
Dec 20 12:55:16.148: INFO: Created: latency-svc-qp4p8
Dec 20 12:55:16.337: INFO: Got endpoints: latency-svc-qp4p8 [3.18871502s]
Dec 20 12:55:16.345: INFO: Created: latency-svc-cm45v
Dec 20 12:55:16.380: INFO: Got endpoints: latency-svc-cm45v [2.973434382s]
Dec 20 12:55:16.604: INFO: Created: latency-svc-l558d
Dec 20 12:55:16.604: INFO: Got endpoints: latency-svc-l558d [2.957066012s]
Dec 20 12:55:16.930: INFO: Created: latency-svc-k46ks
Dec 20 12:55:17.010: INFO: Got endpoints: latency-svc-k46ks [3.313366483s]
Dec 20 12:55:17.783: INFO: Created: latency-svc-jmd6n
Dec 20 12:55:17.873: INFO: Got endpoints: latency-svc-jmd6n [3.928954803s]
Dec 20 12:55:18.172: INFO: Created: latency-svc-7gjll
Dec 20 12:55:18.308: INFO: Got endpoints: latency-svc-7gjll [4.12034094s]
Dec 20 12:55:18.347: INFO: Created: latency-svc-tkv5g
Dec 20 12:55:18.363: INFO: Got endpoints: latency-svc-tkv5g [3.896988494s]
Dec 20 12:55:18.590: INFO: Created: latency-svc-ppf6z
Dec 20 12:55:18.898: INFO: Got endpoints: latency-svc-ppf6z [4.207029539s]
Dec 20 12:55:19.099: INFO: Created: latency-svc-tr598
Dec 20 12:55:19.422: INFO: Got endpoints: latency-svc-tr598 [4.638394508s]
Dec 20 12:55:19.515: INFO: Created: latency-svc-jt2rv
Dec 20 12:55:19.811: INFO: Created: latency-svc-mlprh
Dec 20 12:55:20.035: INFO: Got endpoints: latency-svc-jt2rv [5.067954593s]
Dec 20 12:55:20.089: INFO: Got endpoints: latency-svc-mlprh [4.900837999s]
Dec 20 12:55:20.615: INFO: Created: latency-svc-xtljl
Dec 20 12:55:20.659: INFO: Got endpoints: latency-svc-xtljl [5.263597955s]
Dec 20 12:55:20.945: INFO: Created: latency-svc-94g7v
Dec 20 12:55:21.128: INFO: Got endpoints: latency-svc-94g7v [5.58579507s]
Dec 20 12:55:21.409: INFO: Created: latency-svc-4c4f4
Dec 20 12:55:21.414: INFO: Got endpoints: latency-svc-4c4f4 [5.621242484s]
Dec 20 12:55:22.245: INFO: Created: latency-svc-nvhcm
Dec 20 12:55:22.252: INFO: Got endpoints: latency-svc-nvhcm [6.37506601s]
Dec 20 12:55:23.747: INFO: Created: latency-svc-n98zc
Dec 20 12:55:23.796: INFO: Got endpoints: latency-svc-n98zc [7.458755476s]
Dec 20 12:55:24.096: INFO: Created: latency-svc-h5xs8
Dec 20 12:55:24.193: INFO: Got endpoints: latency-svc-h5xs8 [7.81330721s]
Dec 20 12:55:24.296: INFO: Created: latency-svc-qthrr
Dec 20 12:55:24.463: INFO: Got endpoints: latency-svc-qthrr [7.858504588s]
Dec 20 12:55:24.790: INFO: Created: latency-svc-lr2rx
Dec 20 12:55:25.110: INFO: Got endpoints: latency-svc-lr2rx [8.09952077s]
Dec 20 12:55:25.117: INFO: Created: latency-svc-6tw9q
Dec 20 12:55:25.132: INFO: Got endpoints: latency-svc-6tw9q [7.259472903s]
Dec 20 12:55:25.445: INFO: Created: latency-svc-jfvlm
Dec 20 12:55:25.622: INFO: Got endpoints: latency-svc-jfvlm [7.314158157s]
Dec 20 12:55:25.686: INFO: Created: latency-svc-v5ngk
Dec 20 12:55:25.699: INFO: Got endpoints: latency-svc-v5ngk [7.336084738s]
Dec 20 12:55:25.861: INFO: Created: latency-svc-979dv
Dec 20 12:55:26.085: INFO: Got endpoints: latency-svc-979dv [7.186270686s]
Dec 20 12:55:26.429: INFO: Created: latency-svc-l2zjl
Dec 20 12:55:26.435: INFO: Got endpoints: latency-svc-l2zjl [7.012854662s]
Dec 20 12:55:26.933: INFO: Created: latency-svc-8wlgf
Dec 20 12:55:26.948: INFO: Got endpoints: latency-svc-8wlgf [6.913351902s]
Dec 20 12:55:27.207: INFO: Created: latency-svc-tv6w7
Dec 20 12:55:27.207: INFO: Got endpoints: latency-svc-tv6w7 [7.118429288s]
Dec 20 12:55:27.647: INFO: Created: latency-svc-9ftvz
Dec 20 12:55:27.668: INFO: Got endpoints: latency-svc-9ftvz [7.007969766s]
Dec 20 12:55:28.818: INFO: Created: latency-svc-nk7r5
Dec 20 12:55:28.828: INFO: Got endpoints: latency-svc-nk7r5 [7.699208738s]
Dec 20 12:55:29.406: INFO: Created: latency-svc-4v6kb
Dec 20 12:55:29.436: INFO: Got endpoints: latency-svc-4v6kb [8.021944409s]
Dec 20 12:55:29.491: INFO: Created: latency-svc-4gtvw
Dec 20 12:55:29.579: INFO: Got endpoints: latency-svc-4gtvw [7.326324715s]
Dec 20 12:55:29.640: INFO: Created: latency-svc-2gdmt
Dec 20 12:55:29.817: INFO: Created: latency-svc-9k8dn
Dec 20 12:55:29.823: INFO: Got endpoints: latency-svc-2gdmt [6.027473317s]
Dec 20 12:55:29.845: INFO: Got endpoints: latency-svc-9k8dn [5.651492762s]
Dec 20 12:55:30.176: INFO: Created: latency-svc-sv8lb
Dec 20 12:55:30.530: INFO: Created: latency-svc-b66px
Dec 20 12:55:30.744: INFO: Got endpoints: latency-svc-sv8lb [6.281089356s]
Dec 20 12:55:30.768: INFO: Got endpoints: latency-svc-b66px [5.658045697s]
Dec 20 12:55:30.978: INFO: Created: latency-svc-h2f52
Dec 20 12:55:30.978: INFO: Got endpoints: latency-svc-h2f52 [5.845626783s]
Dec 20 12:55:31.136: INFO: Created: latency-svc-d7xvz
Dec 20 12:55:31.169: INFO: Got endpoints: latency-svc-d7xvz [5.545962751s]
Dec 20 12:55:33.730: INFO: Created: latency-svc-4rx4c
Dec 20 12:55:33.765: INFO: Got endpoints: latency-svc-4rx4c [8.064969518s]
Dec 20 12:55:34.132: INFO: Created: latency-svc-mzf9c
Dec 20 12:55:34.152: INFO: Got endpoints: latency-svc-mzf9c [8.066583801s]
Dec 20 12:55:34.504: INFO: Created: latency-svc-6wh9q
Dec 20 12:55:34.541: INFO: Got endpoints: latency-svc-6wh9q [8.105675442s]
Dec 20 12:55:34.648: INFO: Created: latency-svc-2dkgf
Dec 20 12:55:34.673: INFO: Got endpoints: latency-svc-2dkgf [7.724815014s]
Dec 20 12:55:35.037: INFO: Created: latency-svc-gj7j5
Dec 20 12:55:35.126: INFO: Got endpoints: latency-svc-gj7j5 [7.918621876s]
Dec 20 12:55:35.283: INFO: Created: latency-svc-nlhm4
Dec 20 12:55:35.294: INFO: Got endpoints: latency-svc-nlhm4 [7.625802996s]
Dec 20 12:55:35.459: INFO: Created: latency-svc-kq5fj
Dec 20 12:55:35.476: INFO: Got endpoints: latency-svc-kq5fj [6.647607739s]
Dec 20 12:55:35.654: INFO: Created: latency-svc-hrcsc
Dec 20 12:55:35.903: INFO: Got endpoints: latency-svc-hrcsc [6.466782334s]
Dec 20 12:55:35.953: INFO: Created: latency-svc-4pqxc
Dec 20 12:55:35.979: INFO: Got endpoints: latency-svc-4pqxc [6.400006233s]
Dec 20 12:55:36.151: INFO: Created: latency-svc-kjwsn
Dec 20 12:55:36.236: INFO: Got endpoints: latency-svc-kjwsn [6.412684523s]
Dec 20 12:55:36.308: INFO: Created: latency-svc-qmv7s
Dec 20 12:55:36.321: INFO: Got endpoints: latency-svc-qmv7s [6.475301073s]
Dec 20 12:55:36.435: INFO: Created: latency-svc-njrnh
Dec 20 12:55:36.502: INFO: Got endpoints: latency-svc-njrnh [5.757296605s]
Dec 20 12:55:36.630: INFO: Created: latency-svc-49xql
Dec 20 12:55:36.655: INFO: Got endpoints: latency-svc-49xql [5.886603895s]
Dec 20 12:55:36.839: INFO: Created: latency-svc-vrkvv
Dec 20 12:55:36.907: INFO: Got endpoints: latency-svc-vrkvv [5.92848242s]
Dec 20 12:55:36.919: INFO: Created: latency-svc-jpqxl
Dec 20 12:55:37.022: INFO: Got endpoints: latency-svc-jpqxl [5.852848792s]
Dec 20 12:55:37.041: INFO: Created: latency-svc-nxwk8
Dec 20 12:55:37.060: INFO: Got endpoints: latency-svc-nxwk8 [3.295644947s]
Dec 20 12:55:37.116: INFO: Created: latency-svc-c9rkz
Dec 20 12:55:37.324: INFO: Got endpoints: latency-svc-c9rkz [3.172328137s]
Dec 20 12:55:38.146: INFO: Created: latency-svc-86vb7
Dec 20 12:55:38.170: INFO: Got endpoints: latency-svc-86vb7 [3.629068257s]
Dec 20 12:55:38.211: INFO: Created: latency-svc-mfb7b
Dec 20 12:55:38.218: INFO: Got endpoints: latency-svc-mfb7b [3.545031258s]
Dec 20 12:55:38.379: INFO: Created: latency-svc-x4skx
Dec 20 12:55:38.402: INFO: Got endpoints: latency-svc-x4skx [3.275170168s]
Dec 20 12:55:38.639: INFO: Created: latency-svc-vrs2n
Dec 20 12:55:38.654: INFO: Got endpoints: latency-svc-vrs2n [3.360656413s]
Dec 20 12:55:38.696: INFO: Created: latency-svc-6kzl9
Dec 20 12:55:38.717: INFO: Got endpoints: latency-svc-6kzl9 [3.241620687s]
Dec 20 12:55:38.858: INFO: Created: latency-svc-mlm2j
Dec 20 12:55:38.873: INFO: Got endpoints: latency-svc-mlm2j [2.969679886s]
Dec 20 12:55:39.036: INFO: Created: latency-svc-wkxvj
Dec 20 12:55:39.055: INFO: Got endpoints: latency-svc-wkxvj [3.075774587s]
Dec 20 12:55:39.100: INFO: Created: latency-svc-5bclc
Dec 20 12:55:39.107: INFO: Got endpoints: latency-svc-5bclc [2.870555483s]
Dec 20 12:55:39.234: INFO: Created: latency-svc-zqghk
Dec 20 12:55:39.255: INFO: Got endpoints: latency-svc-zqghk [2.933469333s]
Dec 20 12:55:39.393: INFO: Created: latency-svc-7nhlc
Dec 20 12:55:39.404: INFO: Got endpoints: latency-svc-7nhlc [2.901878666s]
Dec 20 12:55:39.451: INFO: Created: latency-svc-lsf6z
Dec 20 12:55:39.471: INFO: Got endpoints: latency-svc-lsf6z [2.815163613s]
Dec 20 12:55:39.618: INFO: Created: latency-svc-n8wzp
Dec 20 12:55:39.638: INFO: Got endpoints: latency-svc-n8wzp [2.730830618s]
Dec 20 12:55:39.778: INFO: Created: latency-svc-dlp7b
Dec 20 12:55:39.822: INFO: Got endpoints: latency-svc-dlp7b [2.800247889s]
Dec 20 12:55:40.061: INFO: Created: latency-svc-k2vs5
Dec 20 12:55:40.183: INFO: Got endpoints: latency-svc-k2vs5 [3.121805579s]
Dec 20 12:55:40.196: INFO: Created: latency-svc-plswr
Dec 20 12:55:40.203: INFO: Got endpoints: latency-svc-plswr [2.87871552s]
Dec 20 12:55:40.422: INFO: Created: latency-svc-6l59s
Dec 20 12:55:40.476: INFO: Created: latency-svc-2nmxg
Dec 20 12:55:40.477: INFO: Got endpoints: latency-svc-6l59s [2.306573412s]
Dec 20 12:55:40.591: INFO: Got endpoints: latency-svc-2nmxg [2.372909716s]
Dec 20 12:55:40.618: INFO: Created: latency-svc-mvbtn
Dec 20 12:55:40.653: INFO: Got endpoints: latency-svc-mvbtn [2.251401788s]
Dec 20 12:55:40.779: INFO: Created: latency-svc-fxmsg
Dec 20 12:55:40.838: INFO: Created: latency-svc-ss5cf
Dec 20 12:55:40.855: INFO: Got endpoints: latency-svc-ss5cf [2.137811773s]
Dec 20 12:55:40.987: INFO: Got endpoints: latency-svc-fxmsg [2.332016696s]
Dec 20 12:55:41.044: INFO: Created: latency-svc-tvk6s
Dec 20 12:55:41.080: INFO: Got endpoints: latency-svc-tvk6s [2.206681144s]
Dec 20 12:55:41.170: INFO: Created: latency-svc-b877w
Dec 20 12:55:41.354: INFO: Got endpoints: latency-svc-b877w [2.299288261s]
Dec 20 12:55:41.367: INFO: Created: latency-svc-dprk5
Dec 20 12:55:41.367: INFO: Got endpoints: latency-svc-dprk5 [2.260386473s]
Dec 20 12:55:41.460: INFO: Created: latency-svc-ttrrl
Dec 20 12:55:41.528: INFO: Got endpoints: latency-svc-ttrrl [2.272738557s]
Dec 20 12:55:41.549: INFO: Created: latency-svc-z4h85
Dec 20 12:55:41.574: INFO: Got endpoints: latency-svc-z4h85 [2.169241857s]
Dec 20 12:55:41.710: INFO: Created: latency-svc-wtjrx
Dec 20 12:55:41.751: INFO: Got endpoints: latency-svc-wtjrx [2.280265932s]
Dec 20 12:55:41.946: INFO: Created: latency-svc-k98f8
Dec 20 12:55:41.960: INFO: Got endpoints: latency-svc-k98f8 [2.321965253s]
Dec 20 12:55:42.043: INFO: Created: latency-svc-pxv6s
Dec 20 12:55:42.192: INFO: Got endpoints: latency-svc-pxv6s [2.369465614s]
Dec 20 12:55:42.263: INFO: Created: latency-svc-t26bc
Dec 20 12:55:42.547: INFO: Got endpoints: latency-svc-t26bc [2.364072274s]
Dec 20 12:55:42.609: INFO: Created: latency-svc-qpcf2
Dec 20 12:55:42.787: INFO: Got endpoints: latency-svc-qpcf2 [2.583713861s]
Dec 20 12:55:42.845: INFO: Created: latency-svc-tz2kj
Dec 20 12:55:43.126: INFO: Got endpoints: latency-svc-tz2kj [2.649502814s]
Dec 20 12:55:43.155: INFO: Created: latency-svc-spx27
Dec 20 12:55:43.192: INFO: Got endpoints: latency-svc-spx27 [2.600051494s]
Dec 20 12:55:43.399: INFO: Created: latency-svc-8msh9
Dec 20 12:55:43.421: INFO: Got endpoints: latency-svc-8msh9 [2.767622985s]
Dec 20 12:55:43.474: INFO: Created: latency-svc-zjgsb
Dec 20 12:55:43.583: INFO: Got endpoints: latency-svc-zjgsb [2.727594361s]
Dec 20 12:55:43.631: INFO: Created: latency-svc-4ntzd
Dec 20 12:55:43.659: INFO: Got endpoints: latency-svc-4ntzd [2.672170432s]
Dec 20 12:55:43.856: INFO: Created: latency-svc-2fh8j
Dec 20 12:55:43.902: INFO: Got endpoints: latency-svc-2fh8j [2.821555696s]
Dec 20 12:55:44.011: INFO: Created: latency-svc-qwzkl
Dec 20 12:55:44.048: INFO: Got endpoints: latency-svc-qwzkl [2.693718921s]
Dec 20 12:55:44.232: INFO: Created: latency-svc-kn46c
Dec 20 12:55:44.276: INFO: Got endpoints: latency-svc-kn46c [2.90797676s]
Dec 20 12:55:44.343: INFO: Created: latency-svc-x872k
Dec 20 12:55:44.452: INFO: Got endpoints: latency-svc-x872k [2.923921618s]
Dec 20 12:55:44.503: INFO: Created: latency-svc-rv88q
Dec 20 12:55:44.519: INFO: Got endpoints: latency-svc-rv88q [2.944835429s]
Dec 20 12:55:44.631: INFO: Created: latency-svc-cg85q
Dec 20 12:55:44.653: INFO: Got endpoints: latency-svc-cg85q [2.901169666s]
Dec 20 12:55:44.697: INFO: Created: latency-svc-w7zs5
Dec 20 12:55:44.706: INFO: Got endpoints: latency-svc-w7zs5 [2.745089s]
Dec 20 12:55:44.799: INFO: Created: latency-svc-728bl
Dec 20 12:55:44.812: INFO: Got endpoints: latency-svc-728bl [2.618918222s]
Dec 20 12:55:44.872: INFO: Created: latency-svc-tpx99
Dec 20 12:55:45.017: INFO: Got endpoints: latency-svc-tpx99 [2.470407355s]
Dec 20 12:55:45.057: INFO: Created: latency-svc-pxjhf
Dec 20 12:55:45.204: INFO: Got endpoints: latency-svc-pxjhf [2.416854904s]
Dec 20 12:55:45.218: INFO: Created: latency-svc-zg6zj
Dec 20 12:55:45.220: INFO: Got endpoints: latency-svc-zg6zj [2.093048667s]
Dec 20 12:55:45.256: INFO: Created: latency-svc-vw5x7
Dec 20 12:55:45.269: INFO: Got endpoints: latency-svc-vw5x7 [2.076777952s]
Dec 20 12:55:45.436: INFO: Created: latency-svc-xszqg
Dec 20 12:55:45.450: INFO: Got endpoints: latency-svc-xszqg [2.02854294s]
Dec 20 12:55:45.576: INFO: Created: latency-svc-tr9rc
Dec 20 12:55:45.585: INFO: Got endpoints: latency-svc-tr9rc [2.001550608s]
Dec 20 12:55:45.641: INFO: Created: latency-svc-bb45z
Dec 20 12:55:45.806: INFO: Got endpoints: latency-svc-bb45z [2.146226969s]
Dec 20 12:55:45.824: INFO: Created: latency-svc-tdglq
Dec 20 12:55:45.834: INFO: Got endpoints: latency-svc-tdglq [1.932223852s]
Dec 20 12:55:46.009: INFO: Created: latency-svc-69d6t
Dec 20 12:55:46.045: INFO: Got endpoints: latency-svc-69d6t [1.996660191s]
Dec 20 12:55:46.116: INFO: Created: latency-svc-5pwgj
Dec 20 12:55:46.228: INFO: Got endpoints: latency-svc-5pwgj [1.952785539s]
Dec 20 12:55:46.251: INFO: Created: latency-svc-hskqw
Dec 20 12:55:46.279: INFO: Got endpoints: latency-svc-hskqw [1.827311103s]
Dec 20 12:55:46.427: INFO: Created: latency-svc-6qdgh
Dec 20 12:55:46.427: INFO: Got endpoints: latency-svc-6qdgh [1.907669319s]
Dec 20 12:55:46.499: INFO: Created: latency-svc-j6rzq
Dec 20 12:55:46.585: INFO: Got endpoints: latency-svc-j6rzq [1.931623681s]
Dec 20 12:55:46.600: INFO: Created: latency-svc-v4q22
Dec 20 12:55:46.652: INFO: Got endpoints: latency-svc-v4q22 [1.946600968s]
Dec 20 12:55:46.797: INFO: Created: latency-svc-2lzbc
Dec 20 12:55:46.824: INFO: Got endpoints: latency-svc-2lzbc [2.012384689s]
Dec 20 12:55:46.843: INFO: Created: latency-svc-9dqdl
Dec 20 12:55:46.860: INFO: Got endpoints: latency-svc-9dqdl [1.842313977s]
Dec 20 12:55:46.976: INFO: Created: latency-svc-x76hr
Dec 20 12:55:46.977: INFO: Got endpoints: latency-svc-x76hr [1.772326545s]
Dec 20 12:55:47.016: INFO: Created: latency-svc-gqljb
Dec 20 12:55:47.172: INFO: Got endpoints: latency-svc-gqljb [1.952270865s]
Dec 20 12:55:47.227: INFO: Created: latency-svc-zpwzg
Dec 20 12:55:47.451: INFO: Got endpoints: latency-svc-zpwzg [2.18250691s]
Dec 20 12:55:47.547: INFO: Created: latency-svc-wbhsz
Dec 20 12:55:47.670: INFO: Got endpoints: latency-svc-wbhsz [2.220281539s]
Dec 20 12:55:47.715: INFO: Created: latency-svc-2xbzl
Dec 20 12:55:47.733: INFO: Got endpoints: latency-svc-2xbzl [2.147845201s]
Dec 20 12:55:47.840: INFO: Created: latency-svc-7ckfk
Dec 20 12:55:47.867: INFO: Got endpoints: latency-svc-7ckfk [2.061505605s]
Dec 20 12:55:48.108: INFO: Created: latency-svc-28s2g
Dec 20 12:55:48.122: INFO: Got endpoints: latency-svc-28s2g [2.287455351s]
Dec 20 12:55:48.195: INFO: Created: latency-svc-6g85z
Dec 20 12:55:48.357: INFO: Got endpoints: latency-svc-6g85z [2.311423874s]
Dec 20 12:55:48.557: INFO: Created: latency-svc-j85km
Dec 20 12:55:48.740: INFO: Got endpoints: latency-svc-j85km [2.511043504s]
Dec 20 12:55:48.755: INFO: Created: latency-svc-8d69m
Dec 20 12:55:48.777: INFO: Got endpoints: latency-svc-8d69m [2.497201082s]
Dec 20 12:55:48.988: INFO: Created: latency-svc-5rcmp
Dec 20 12:55:49.444: INFO: Got endpoints: latency-svc-5rcmp [3.017127156s]
Dec 20 12:55:49.674: INFO: Created: latency-svc-8gcx6
Dec 20 12:55:49.700: INFO: Got endpoints: latency-svc-8gcx6 [3.114616423s]
Dec 20 12:55:49.703: INFO: Created: latency-svc-v4bsl
Dec 20 12:55:49.712: INFO: Got endpoints: latency-svc-v4bsl [3.059365304s]
Dec 20 12:55:49.978: INFO: Created: latency-svc-xkswt
Dec 20 12:55:50.019: INFO: Got endpoints: latency-svc-xkswt [3.194733234s]
Dec 20 12:55:50.222: INFO: Created: latency-svc-z7q5r
Dec 20 12:55:50.255: INFO: Got endpoints: latency-svc-z7q5r [3.394747562s]
Dec 20 12:55:50.591: INFO: Created: latency-svc-fkb5t
Dec 20 12:55:50.621: INFO: Got endpoints: latency-svc-fkb5t [3.643689153s]
Dec 20 12:55:50.915: INFO: Created: latency-svc-c6stz
Dec 20 12:55:51.189: INFO: Got endpoints: latency-svc-c6stz [4.015743555s]
Dec 20 12:55:51.207: INFO: Created: latency-svc-m7rlk
Dec 20 12:55:51.215: INFO: Got endpoints: latency-svc-m7rlk [3.763198293s]
Dec 20 12:55:51.439: INFO: Created: latency-svc-6b7zw
Dec 20 12:55:51.440: INFO: Got endpoints: latency-svc-6b7zw [3.769147973s]
Dec 20 12:55:51.500: INFO: Created: latency-svc-th744
Dec 20 12:55:51.773: INFO: Got endpoints: latency-svc-th744 [4.040156444s]
Dec 20 12:55:52.148: INFO: Created: latency-svc-2dd2v
Dec 20 12:55:52.196: INFO: Got endpoints: latency-svc-2dd2v [4.328936412s]
Dec 20 12:55:52.539: INFO: Created: latency-svc-mxpdq
Dec 20 12:55:52.578: INFO: Got endpoints: latency-svc-mxpdq [4.455821523s]
Dec 20 12:55:52.751: INFO: Created: latency-svc-d2jwp
Dec 20 12:55:52.804: INFO: Got endpoints: latency-svc-d2jwp [4.446568131s]
Dec 20 12:55:53.008: INFO: Created: latency-svc-kb8v5
Dec 20 12:55:53.036: INFO: Got endpoints: latency-svc-kb8v5 [4.29575973s]
Dec 20 12:55:53.178: INFO: Created: latency-svc-8wbfq
Dec 20 12:55:53.195: INFO: Got endpoints: latency-svc-8wbfq [4.417791034s]
Dec 20 12:55:53.377: INFO: Created: latency-svc-d6gnw
Dec 20 12:55:53.399: INFO: Got endpoints: latency-svc-d6gnw [3.954549356s]
Dec 20 12:55:53.400: INFO: Latencies: [289.33701ms 303.203029ms 657.311355ms 963.316483ms 998.473471ms 1.504024106s 1.741467773s 1.772326545s 1.827311103s 1.842313977s 1.907669319s 1.931623681s 1.932223852s 1.946600968s 1.952270865s 1.952785539s 1.996660191s 2.001550608s 2.012384689s 2.02854294s 2.060789039s 2.061505605s 2.076777952s 2.093048667s 2.109577932s 2.137811773s 2.146226969s 2.147845201s 2.169241857s 2.18250691s 2.206681144s 2.220281539s 2.251401788s 2.260386473s 2.272738557s 2.280265932s 2.287455351s 2.299288261s 2.306573412s 2.311423874s 2.321965253s 2.332016696s 2.364072274s 2.369465614s 2.372909716s 2.416854904s 2.449829284s 2.470407355s 2.497201082s 2.511043504s 2.583713861s 2.600051494s 2.618918222s 2.649502814s 2.672170432s 2.693718921s 2.727594361s 2.730830618s 2.745089s 2.767622985s 2.783812228s 2.800247889s 2.815163613s 2.821555696s 2.870555483s 2.87871552s 2.901169666s 2.901878666s 2.90797676s 2.923921618s 2.933469333s 2.944835429s 2.957066012s 2.969679886s 2.973434382s 3.011227555s 3.017127156s 3.053360445s 3.059365304s 3.07525381s 3.075774587s 3.096001371s 3.100166987s 3.107442577s 3.114616423s 3.121805579s 3.127607745s 3.169444558s 3.172328137s 3.177373265s 3.18871502s 3.194733234s 3.222067722s 3.241620687s 3.256204052s 3.275170168s 3.295644947s 3.304360629s 3.307277065s 3.313366483s 3.360656413s 3.394747562s 3.48294646s 3.532928757s 3.545031258s 3.599657694s 3.611213384s 3.629068257s 3.643689153s 3.660798471s 3.667331829s 3.684040242s 3.692194804s 3.714688147s 3.718351782s 3.739937561s 3.748333271s 3.750880956s 3.76249876s 3.763198293s 3.765412634s 3.769147973s 3.896988494s 3.928954803s 3.954549356s 4.015743555s 4.040156444s 4.12034094s 4.17612806s 4.207029539s 4.270380374s 4.29575973s 4.320562294s 4.328936412s 4.336953918s 4.417791034s 4.446568131s 4.455821523s 4.638394508s 4.655700984s 4.694103106s 4.71898238s 4.732622281s 4.771083296s 4.866742748s 4.900837999s 4.985683629s 5.067954593s 5.199926094s 5.263597955s 5.298812654s 5.329243932s 5.330838908s 5.542812857s 5.545962751s 5.580624169s 5.58579507s 5.621242484s 5.632161191s 5.651492762s 5.658045697s 5.757296605s 5.845626783s 5.852848792s 5.886603895s 5.92848242s 6.027473317s 6.041229078s 6.130367502s 6.211895161s 6.247334448s 6.281089356s 6.309303038s 6.37506601s 6.400006233s 6.412684523s 6.466782334s 6.475301073s 6.647607739s 6.913351902s 7.007969766s 7.012854662s 7.118429288s 7.186270686s 7.259472903s 7.314158157s 7.326324715s 7.336084738s 7.458755476s 7.625802996s 7.699208738s 7.724815014s 7.81330721s 7.858504588s 7.918621876s 8.021944409s 8.064969518s 8.066583801s 8.09952077s 8.105675442s]
Dec 20 12:55:53.400: INFO: 50 %ile: 3.360656413s
Dec 20 12:55:53.400: INFO: 90 %ile: 7.007969766s
Dec 20 12:55:53.400: INFO: 99 %ile: 8.09952077s
Dec 20 12:55:53.400: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:55:53.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-28h6c" for this suite.
Dec 20 12:57:05.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:05.601: INFO: namespace: e2e-tests-svc-latency-28h6c, resource: bindings, ignored listing per whitelist
Dec 20 12:57:05.666: INFO: namespace e2e-tests-svc-latency-28h6c deletion completed in 1m12.252304987s

• [SLOW TEST:146.097 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:57:05.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:57:05.977: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 20 12:57:05.983: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t2rtz/daemonsets","resourceVersion":"15461515"},"items":null}

Dec 20 12:57:05.986: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t2rtz/pods","resourceVersion":"15461515"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:57:05.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-t2rtz" for this suite.
Dec 20 12:57:12.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:12.448: INFO: namespace: e2e-tests-daemonsets-t2rtz, resource: bindings, ignored listing per whitelist
Dec 20 12:57:12.570: INFO: namespace e2e-tests-daemonsets-t2rtz deletion completed in 6.575296039s

S [SKIPPING] [6.903 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 20 12:57:05.977: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:57:12.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 20 12:57:12.901: INFO: Waiting up to 5m0s for pod "pod-3dacf666-2328-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-gjbqd" to be "success or failure"
Dec 20 12:57:12.919: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.241493ms
Dec 20 12:57:15.016: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114675876s
Dec 20 12:57:17.493: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591169317s
Dec 20 12:57:19.517: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61597374s
Dec 20 12:57:21.842: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.940588639s
Dec 20 12:57:23.875: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.973886451s
Dec 20 12:57:25.907: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.005735459s
Dec 20 12:57:27.933: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.031814789s
STEP: Saw pod success
Dec 20 12:57:27.933: INFO: Pod "pod-3dacf666-2328-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 12:57:29.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3dacf666-2328-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 12:57:30.717: INFO: Waiting for pod pod-3dacf666-2328-11ea-851f-0242ac110004 to disappear
Dec 20 12:57:30.750: INFO: Pod pod-3dacf666-2328-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:57:30.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gjbqd" for this suite.
Dec 20 12:57:38.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:57:39.112: INFO: namespace: e2e-tests-emptydir-gjbqd, resource: bindings, ignored listing per whitelist
Dec 20 12:57:39.123: INFO: namespace e2e-tests-emptydir-gjbqd deletion completed in 8.304993734s

• [SLOW TEST:26.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:57:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 20 12:57:54.793: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:57:56.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-58k8q" for this suite.
Dec 20 12:58:17.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:17.645: INFO: namespace: e2e-tests-replicaset-58k8q, resource: bindings, ignored listing per whitelist
Dec 20 12:58:17.738: INFO: namespace e2e-tests-replicaset-58k8q deletion completed in 21.665251739s

• [SLOW TEST:38.614 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:58:17.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 20 12:58:30.188: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-648178fb-2328-11ea-851f-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-2ggrv", SelfLink:"/api/v1/namespaces/e2e-tests-pods-2ggrv/pods/pod-submit-remove-648178fb-2328-11ea-851f-0242ac110004", UID:"6484ac2a-2328-11ea-a994-fa163e34d433", ResourceVersion:"15461697", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712443498, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"9258350"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ww6f4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000998280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ww6f4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00101cba8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ff67e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00101cbe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00101cc10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00101cc18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00101cc1c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443498, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443508, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443508, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712443498, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001452e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001452e80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://14dd3db0e82fa6fa14c135ab97f954a3a40a7739db32b60d26cce129ad67190b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:58:39.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2ggrv" for this suite.
Dec 20 12:58:45.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:58:45.524: INFO: namespace: e2e-tests-pods-2ggrv, resource: bindings, ignored listing per whitelist
Dec 20 12:58:45.597: INFO: namespace e2e-tests-pods-2ggrv deletion completed in 6.322347426s

• [SLOW TEST:27.859 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:58:45.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 12:58:46.007: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 20 12:58:51.733: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 20 12:58:59.769: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 20 12:59:00.116: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-rbrx6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rbrx6/deployments/test-cleanup-deployment,UID:7d69e8d7-2328-11ea-a994-fa163e34d433,ResourceVersion:15461765,Generation:1,CreationTimestamp:2019-12-20 12:58:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 20 12:59:00.146: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Dec 20 12:59:00.146: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 20 12:59:00.147: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-rbrx6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rbrx6/replicasets/test-cleanup-controller,UID:751ba7f3-2328-11ea-a994-fa163e34d433,ResourceVersion:15461766,Generation:1,CreationTimestamp:2019-12-20 12:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7d69e8d7-2328-11ea-a994-fa163e34d433 0xc002460357 0xc002460358}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 20 12:59:00.462: INFO: Pod "test-cleanup-controller-rs55s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rs55s,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-rbrx6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rbrx6/pods/test-cleanup-controller-rs55s,UID:7532d6ec-2328-11ea-a994-fa163e34d433,ResourceVersion:15461761,Generation:0,CreationTimestamp:2019-12-20 12:58:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 751ba7f3-2328-11ea-a994-fa163e34d433 0xc0024608f7 0xc0024608f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pw6ln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pw6ln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pw6ln true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002460960} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002460980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:58:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:58:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:58:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-20 12:58:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-20 12:58:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-20 12:58:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bb121b6984255b2bc0e4d8db6b811995dbf652cd281f1573e284fc451b9817f4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:59:00.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rbrx6" for this suite.
Dec 20 12:59:11.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:59:11.872: INFO: namespace: e2e-tests-deployment-rbrx6, resource: bindings, ignored listing per whitelist
Dec 20 12:59:12.078: INFO: namespace e2e-tests-deployment-rbrx6 deletion completed in 11.529462814s

• [SLOW TEST:26.481 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:59:12.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 20 12:59:22.867: INFO: Successfully updated pod "pod-update-84da931f-2328-11ea-851f-0242ac110004"
STEP: verifying the updated pod is in kubernetes
Dec 20 12:59:22.906: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 12:59:22.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lssnh" for this suite.
Dec 20 12:59:46.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 12:59:47.037: INFO: namespace: e2e-tests-pods-lssnh, resource: bindings, ignored listing per whitelist
Dec 20 12:59:47.091: INFO: namespace e2e-tests-pods-lssnh deletion completed in 24.179533089s

• [SLOW TEST:35.012 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 12:59:47.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vqdss
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 12:59:47.225: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 13:00:23.663: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-vqdss PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:00:23.663: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:00:24.306: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:00:24.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-vqdss" for this suite.
Dec 20 13:00:48.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:00:48.493: INFO: namespace: e2e-tests-pod-network-test-vqdss, resource: bindings, ignored listing per whitelist
Dec 20 13:00:48.700: INFO: namespace e2e-tests-pod-network-test-vqdss deletion completed in 24.370028585s

• [SLOW TEST:61.608 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:00:48.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-be74f882-2328-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 13:00:48.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-tsnbs" to be "success or failure"
Dec 20 13:00:49.019: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 63.675521ms
Dec 20 13:00:51.035: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079539971s
Dec 20 13:00:53.056: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100502778s
Dec 20 13:00:55.661: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.70568451s
Dec 20 13:00:58.245: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.289219684s
Dec 20 13:01:00.847: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 11.891400161s
Dec 20 13:01:02.868: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 13.912030886s
Dec 20 13:01:04.912: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.956890194s
STEP: Saw pod success
Dec 20 13:01:04.913: INFO: Pod "pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:01:04.927: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 20 13:01:05.523: INFO: Waiting for pod pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004 to disappear
Dec 20 13:01:06.398: INFO: Pod pod-projected-configmaps-be76f81e-2328-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:01:06.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tsnbs" for this suite.
Dec 20 13:01:12.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:01:12.851: INFO: namespace: e2e-tests-projected-tsnbs, resource: bindings, ignored listing per whitelist
Dec 20 13:01:12.866: INFO: namespace e2e-tests-projected-tsnbs deletion completed in 6.453608201s

• [SLOW TEST:24.166 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:01:12.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:01:23.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xmmjv" for this suite.
Dec 20 13:02:17.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:02:17.206: INFO: namespace: e2e-tests-kubelet-test-xmmjv, resource: bindings, ignored listing per whitelist
Dec 20 13:02:17.298: INFO: namespace e2e-tests-kubelet-test-xmmjv deletion completed in 54.205331899s

• [SLOW TEST:64.432 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:02:17.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 20 13:02:17.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sqmx9'
Dec 20 13:02:19.355: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 20 13:02:19.355: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 20 13:02:19.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-sqmx9'
Dec 20 13:02:19.636: INFO: stderr: ""
Dec 20 13:02:19.636: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:02:19.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sqmx9" for this suite.
Dec 20 13:02:26.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:02:26.100: INFO: namespace: e2e-tests-kubectl-sqmx9, resource: bindings, ignored listing per whitelist
Dec 20 13:02:26.799: INFO: namespace e2e-tests-kubectl-sqmx9 deletion completed in 7.152034951s

• [SLOW TEST:9.500 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:02:26.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 20 13:02:27.025: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-gfb22" to be "success or failure"
Dec 20 13:02:27.033: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.46442ms
Dec 20 13:02:29.046: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021150978s
Dec 20 13:02:31.067: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041298578s
Dec 20 13:02:33.165: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139487045s
Dec 20 13:02:35.211: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186168676s
Dec 20 13:02:37.325: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.299280082s
Dec 20 13:02:39.343: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.317625062s
Dec 20 13:02:41.362: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.336663624s
Dec 20 13:02:43.391: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.365361726s
STEP: Saw pod success
Dec 20 13:02:43.391: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 20 13:02:43.401: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 20 13:02:44.638: INFO: Waiting for pod pod-host-path-test to disappear
Dec 20 13:02:44.654: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:02:44.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-gfb22" for this suite.
Dec 20 13:02:50.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:02:51.030: INFO: namespace: e2e-tests-hostpath-gfb22, resource: bindings, ignored listing per whitelist
Dec 20 13:02:51.266: INFO: namespace e2e-tests-hostpath-gfb22 deletion completed in 6.599763281s

• [SLOW TEST:24.466 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:02:51.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-07a4792a-2329-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 13:02:51.757: INFO: Waiting up to 5m0s for pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-g8jm8" to be "success or failure"
Dec 20 13:02:51.783: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.019114ms
Dec 20 13:02:54.796: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.03865624s
Dec 20 13:02:56.814: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.05696827s
Dec 20 13:02:58.846: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.088670102s
Dec 20 13:03:03.266: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.508708663s
Dec 20 13:03:06.103: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.346230833s
Dec 20 13:03:08.119: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.361712443s
Dec 20 13:03:10.144: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.386446095s
STEP: Saw pod success
Dec 20 13:03:10.144: INFO: Pod "pod-secrets-07a67217-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:03:10.153: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-07a67217-2329-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 13:03:10.602: INFO: Waiting for pod pod-secrets-07a67217-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:03:10.983: INFO: Pod pod-secrets-07a67217-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:03:10.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-g8jm8" for this suite.
Dec 20 13:03:17.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:03:17.345: INFO: namespace: e2e-tests-secrets-g8jm8, resource: bindings, ignored listing per whitelist
Dec 20 13:03:17.345: INFO: namespace e2e-tests-secrets-g8jm8 deletion completed in 6.348327061s

• [SLOW TEST:26.079 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:03:17.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nbwcs
Dec 20 13:03:29.874: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nbwcs
STEP: checking the pod's current state and verifying that restartCount is present
Dec 20 13:03:29.970: INFO: Initial restart count of pod liveness-http is 0
Dec 20 13:03:44.661: INFO: Restart count of pod e2e-tests-container-probe-nbwcs/liveness-http is now 1 (14.69136102s elapsed)
Dec 20 13:04:07.092: INFO: Restart count of pod e2e-tests-container-probe-nbwcs/liveness-http is now 2 (37.122168299s elapsed)
Dec 20 13:04:23.235: INFO: Restart count of pod e2e-tests-container-probe-nbwcs/liveness-http is now 3 (53.26494792s elapsed)
Dec 20 13:04:45.712: INFO: Restart count of pod e2e-tests-container-probe-nbwcs/liveness-http is now 4 (1m15.741787119s elapsed)
Dec 20 13:05:54.515: INFO: Restart count of pod e2e-tests-container-probe-nbwcs/liveness-http is now 5 (2m24.544716798s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:05:54.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nbwcs" for this suite.
Dec 20 13:06:02.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:03.074: INFO: namespace: e2e-tests-container-probe-nbwcs, resource: bindings, ignored listing per whitelist
Dec 20 13:06:03.243: INFO: namespace e2e-tests-container-probe-nbwcs deletion completed in 8.584240756s

• [SLOW TEST:165.898 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:06:03.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 20 13:06:03.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-4zkxb" to be "success or failure"
Dec 20 13:06:03.797: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 148.473646ms
Dec 20 13:06:05.980: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332102878s
Dec 20 13:06:08.005: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356848688s
Dec 20 13:06:10.791: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14238946s
Dec 20 13:06:12.799: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.151203767s
Dec 20 13:06:14.820: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.172199967s
Dec 20 13:06:19.418: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.769860953s
Dec 20 13:06:21.452: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.803445219s
STEP: Saw pod success
Dec 20 13:06:21.452: INFO: Pod "downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:06:21.465: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004 container client-container: 
STEP: delete the pod
Dec 20 13:06:22.689: INFO: Waiting for pod downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:06:22.703: INFO: Pod downwardapi-volume-7a07b591-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:06:22.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4zkxb" for this suite.
Dec 20 13:06:29.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:29.177: INFO: namespace: e2e-tests-projected-4zkxb, resource: bindings, ignored listing per whitelist
Dec 20 13:06:29.208: INFO: namespace e2e-tests-projected-4zkxb deletion completed in 6.232876482s

• [SLOW TEST:25.964 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:06:29.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 20 13:06:29.464: INFO: Waiting up to 5m0s for pod "downward-api-8961863c-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-xkn2s" to be "success or failure"
Dec 20 13:06:29.481: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.612068ms
Dec 20 13:06:31.499: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034407004s
Dec 20 13:06:33.524: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060028968s
Dec 20 13:06:36.764: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.300013529s
Dec 20 13:06:39.522: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05760824s
Dec 20 13:06:41.742: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.27740989s
STEP: Saw pod success
Dec 20 13:06:41.742: INFO: Pod "downward-api-8961863c-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:06:42.238: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8961863c-2329-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 13:06:42.687: INFO: Waiting for pod downward-api-8961863c-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:06:42.713: INFO: Pod downward-api-8961863c-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:06:42.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xkn2s" for this suite.
Dec 20 13:06:50.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:06:50.855: INFO: namespace: e2e-tests-downward-api-xkn2s, resource: bindings, ignored listing per whitelist
Dec 20 13:06:50.924: INFO: namespace e2e-tests-downward-api-xkn2s deletion completed in 8.199900137s

• [SLOW TEST:21.716 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:06:50.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 20 13:06:51.260: INFO: Waiting up to 5m0s for pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-downward-api-gnf9b" to be "success or failure"
Dec 20 13:06:51.278: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.058124ms
Dec 20 13:06:53.916: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.655283307s
Dec 20 13:06:55.972: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711948203s
Dec 20 13:06:58.735: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.475116622s
Dec 20 13:07:01.163: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.902316911s
Dec 20 13:07:03.190: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.929994313s
Dec 20 13:07:05.571: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.311005058s
STEP: Saw pod success
Dec 20 13:07:05.572: INFO: Pod "downward-api-965d76a7-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:07:06.032: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-965d76a7-2329-11ea-851f-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 20 13:07:06.446: INFO: Waiting for pod downward-api-965d76a7-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:07:06.598: INFO: Pod downward-api-965d76a7-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:07:06.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gnf9b" for this suite.
Dec 20 13:07:13.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:07:13.463: INFO: namespace: e2e-tests-downward-api-gnf9b, resource: bindings, ignored listing per whitelist
Dec 20 13:07:13.537: INFO: namespace e2e-tests-downward-api-gnf9b deletion completed in 6.928727252s

• [SLOW TEST:22.613 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:07:13.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-942mc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 20 13:07:13.934: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 20 13:07:58.766: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-942mc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 13:07:58.767: INFO: >>> kubeConfig: /root/.kube/config
Dec 20 13:08:00.332: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:08:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-942mc" for this suite.
Dec 20 13:08:18.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:08:18.677: INFO: namespace: e2e-tests-pod-network-test-942mc, resource: bindings, ignored listing per whitelist
Dec 20 13:08:18.686: INFO: namespace e2e-tests-pod-network-test-942mc deletion completed in 18.323804507s

• [SLOW TEST:65.149 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:08:18.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 20 13:08:19.234: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:08:39.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-r4f4s" for this suite.
Dec 20 13:08:47.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:08:47.786: INFO: namespace: e2e-tests-init-container-r4f4s, resource: bindings, ignored listing per whitelist
Dec 20 13:08:47.909: INFO: namespace e2e-tests-init-container-r4f4s deletion completed in 8.376367129s

• [SLOW TEST:29.223 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:08:47.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-dc1afba6-2329-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 13:08:48.181: INFO: Waiting up to 5m0s for pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-secrets-nbghv" to be "success or failure"
Dec 20 13:08:48.208: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.529977ms
Dec 20 13:08:50.283: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101253407s
Dec 20 13:08:52.378: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196868274s
Dec 20 13:08:54.393: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211902293s
Dec 20 13:08:57.621: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.439897505s
Dec 20 13:09:01.074: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.892623132s
Dec 20 13:09:03.111: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.93011755s
Dec 20 13:09:05.153: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.972079215s
Dec 20 13:09:07.507: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.325831195s
Dec 20 13:09:09.881: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.699603889s
STEP: Saw pod success
Dec 20 13:09:09.881: INFO: Pod "pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:09:09.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 20 13:09:10.621: INFO: Waiting for pod pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:09:10.648: INFO: Pod pod-secrets-dc1c5d43-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:09:10.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nbghv" for this suite.
Dec 20 13:09:18.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:09:18.973: INFO: namespace: e2e-tests-secrets-nbghv, resource: bindings, ignored listing per whitelist
Dec 20 13:09:19.045: INFO: namespace e2e-tests-secrets-nbghv deletion completed in 8.368276139s

• [SLOW TEST:31.135 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:09:19.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-qqm8j/configmap-test-eea1266a-2329-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 13:09:19.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-qqm8j" to be "success or failure"
Dec 20 13:09:19.425: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622647ms
Dec 20 13:09:21.442: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025947411s
Dec 20 13:09:23.470: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053891881s
Dec 20 13:09:25.558: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14160473s
Dec 20 13:09:27.598: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182118726s
Dec 20 13:09:29.636: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.219318046s
STEP: Saw pod success
Dec 20 13:09:29.636: INFO: Pod "pod-configmaps-eea30925-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:09:29.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-eea30925-2329-11ea-851f-0242ac110004 container env-test: 
STEP: delete the pod
Dec 20 13:09:30.086: INFO: Waiting for pod pod-configmaps-eea30925-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:09:30.134: INFO: Pod pod-configmaps-eea30925-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:09:30.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qqm8j" for this suite.
Dec 20 13:09:36.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:09:36.337: INFO: namespace: e2e-tests-configmap-qqm8j, resource: bindings, ignored listing per whitelist
Dec 20 13:09:36.442: INFO: namespace e2e-tests-configmap-qqm8j deletion completed in 6.243519602s

• [SLOW TEST:17.398 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:09:36.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f91cad1f-2329-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 13:09:37.110: INFO: Waiting up to 5m0s for pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-855vj" to be "success or failure"
Dec 20 13:09:37.197: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 86.796813ms
Dec 20 13:09:39.213: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10269194s
Dec 20 13:09:41.246: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136117001s
Dec 20 13:09:43.262: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152161378s
Dec 20 13:09:45.509: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398718536s
Dec 20 13:09:47.519: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409056095s
Dec 20 13:09:49.592: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481924696s
STEP: Saw pod success
Dec 20 13:09:49.592: INFO: Pod "pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:09:49.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 13:09:49.995: INFO: Waiting for pod pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004 to disappear
Dec 20 13:09:50.012: INFO: Pod pod-configmaps-f91ec383-2329-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:09:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-855vj" for this suite.
Dec 20 13:09:58.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:09:58.739: INFO: namespace: e2e-tests-configmap-855vj, resource: bindings, ignored listing per whitelist
Dec 20 13:09:58.908: INFO: namespace e2e-tests-configmap-855vj deletion completed in 8.642427271s

• [SLOW TEST:22.465 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:09:58.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 20 13:09:59.382: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 20 13:09:59.506: INFO: Number of nodes with available pods: 0
Dec 20 13:09:59.506: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:00.645: INFO: Number of nodes with available pods: 0
Dec 20 13:10:00.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:01.659: INFO: Number of nodes with available pods: 0
Dec 20 13:10:01.659: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:02.642: INFO: Number of nodes with available pods: 0
Dec 20 13:10:02.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:03.525: INFO: Number of nodes with available pods: 0
Dec 20 13:10:03.525: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:04.561: INFO: Number of nodes with available pods: 0
Dec 20 13:10:04.561: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:05.680: INFO: Number of nodes with available pods: 0
Dec 20 13:10:05.681: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:07.365: INFO: Number of nodes with available pods: 0
Dec 20 13:10:07.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:08.351: INFO: Number of nodes with available pods: 0
Dec 20 13:10:08.351: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:08.725: INFO: Number of nodes with available pods: 0
Dec 20 13:10:08.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:09.895: INFO: Number of nodes with available pods: 0
Dec 20 13:10:09.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:10.569: INFO: Number of nodes with available pods: 0
Dec 20 13:10:10.569: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:11.592: INFO: Number of nodes with available pods: 1
Dec 20 13:10:11.592: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 20 13:10:11.832: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:12.911: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:13.918: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:15.719: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:16.051: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:18.051: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:19.001: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:19.918: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:21.025: INFO: Wrong image for pod: daemon-set-6kq88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 20 13:10:21.025: INFO: Pod daemon-set-6kq88 is not available
Dec 20 13:10:22.069: INFO: Pod daemon-set-6r4h5 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 20 13:10:22.300: INFO: Number of nodes with available pods: 0
Dec 20 13:10:22.300: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:23.935: INFO: Number of nodes with available pods: 0
Dec 20 13:10:23.935: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:24.340: INFO: Number of nodes with available pods: 0
Dec 20 13:10:24.341: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:25.332: INFO: Number of nodes with available pods: 0
Dec 20 13:10:25.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:26.344: INFO: Number of nodes with available pods: 0
Dec 20 13:10:26.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:27.369: INFO: Number of nodes with available pods: 0
Dec 20 13:10:27.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:28.320: INFO: Number of nodes with available pods: 0
Dec 20 13:10:28.320: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:29.336: INFO: Number of nodes with available pods: 0
Dec 20 13:10:29.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:30.316: INFO: Number of nodes with available pods: 0
Dec 20 13:10:30.316: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:32.815: INFO: Number of nodes with available pods: 0
Dec 20 13:10:32.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:33.617: INFO: Number of nodes with available pods: 0
Dec 20 13:10:33.617: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:34.336: INFO: Number of nodes with available pods: 0
Dec 20 13:10:34.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:35.323: INFO: Number of nodes with available pods: 0
Dec 20 13:10:35.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:36.360: INFO: Number of nodes with available pods: 0
Dec 20 13:10:36.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 20 13:10:37.382: INFO: Number of nodes with available pods: 1
Dec 20 13:10:37.383: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nwb8d, will wait for the garbage collector to delete the pods
Dec 20 13:10:37.721: INFO: Deleting DaemonSet.extensions daemon-set took: 67.424857ms
Dec 20 13:10:37.822: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.421037ms
Dec 20 13:10:47.341: INFO: Number of nodes with available pods: 0
Dec 20 13:10:47.341: INFO: Number of running nodes: 0, number of available pods: 0
Dec 20 13:10:47.386: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nwb8d/daemonsets","resourceVersion":"15463137"},"items":null}

Dec 20 13:10:47.392: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nwb8d/pods","resourceVersion":"15463137"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:10:47.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nwb8d" for this suite.
Dec 20 13:10:55.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:10:55.716: INFO: namespace: e2e-tests-daemonsets-nwb8d, resource: bindings, ignored listing per whitelist
Dec 20 13:10:55.729: INFO: namespace e2e-tests-daemonsets-nwb8d deletion completed in 8.299021523s

• [SLOW TEST:56.821 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:10:55.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 20 13:10:55.925: INFO: Waiting up to 5m0s for pod "client-containers-2840d516-232a-11ea-851f-0242ac110004" in namespace "e2e-tests-containers-bspxq" to be "success or failure"
Dec 20 13:10:55.933: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068695ms
Dec 20 13:10:58.015: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089756415s
Dec 20 13:11:00.036: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110939464s
Dec 20 13:11:02.373: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448562244s
Dec 20 13:11:04.411: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48647565s
Dec 20 13:11:06.448: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.52299945s
Dec 20 13:11:08.601: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.676023414s
STEP: Saw pod success
Dec 20 13:11:08.601: INFO: Pod "client-containers-2840d516-232a-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:11:08.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-2840d516-232a-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 13:11:09.520: INFO: Waiting for pod client-containers-2840d516-232a-11ea-851f-0242ac110004 to disappear
Dec 20 13:11:09.525: INFO: Pod client-containers-2840d516-232a-11ea-851f-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:11:09.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-bspxq" for this suite.
Dec 20 13:11:15.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:11:15.633: INFO: namespace: e2e-tests-containers-bspxq, resource: bindings, ignored listing per whitelist
Dec 20 13:11:15.839: INFO: namespace e2e-tests-containers-bspxq deletion completed in 6.309566655s

• [SLOW TEST:20.110 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:11:15.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 20 13:11:16.135: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:11:18.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-fwdq9" for this suite.
Dec 20 13:11:27.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:11:28.062: INFO: namespace: e2e-tests-replication-controller-fwdq9, resource: bindings, ignored listing per whitelist
Dec 20 13:11:28.114: INFO: namespace e2e-tests-replication-controller-fwdq9 deletion completed in 9.228975191s

• [SLOW TEST:12.274 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:11:28.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 20 13:11:32.577: INFO: Waiting up to 5m0s for pod "pod-3dbda555-232a-11ea-851f-0242ac110004" in namespace "e2e-tests-emptydir-k8275" to be "success or failure"
Dec 20 13:11:32.896: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 318.478861ms
Dec 20 13:11:37.134: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556732123s
Dec 20 13:11:39.147: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56904183s
Dec 20 13:11:42.146: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.568973774s
Dec 20 13:11:44.171: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.593494798s
Dec 20 13:11:46.267: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.689435889s
Dec 20 13:11:49.087: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.509666243s
Dec 20 13:11:51.115: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.537054923s
STEP: Saw pod success
Dec 20 13:11:51.115: INFO: Pod "pod-3dbda555-232a-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:11:51.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3dbda555-232a-11ea-851f-0242ac110004 container test-container: 
STEP: delete the pod
Dec 20 13:11:53.699: INFO: Waiting for pod pod-3dbda555-232a-11ea-851f-0242ac110004 to disappear
Dec 20 13:11:54.011: INFO: Pod pod-3dbda555-232a-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:11:54.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k8275" for this suite.
Dec 20 13:12:02.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:12:02.246: INFO: namespace: e2e-tests-emptydir-k8275, resource: bindings, ignored listing per whitelist
Dec 20 13:12:02.500: INFO: namespace e2e-tests-emptydir-k8275 deletion completed in 8.475687196s

• [SLOW TEST:34.385 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:12:02.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-50546213-232a-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 13:12:03.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-hnkm7" to be "success or failure"
Dec 20 13:12:03.368: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 124.852128ms
Dec 20 13:12:06.024: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.78120256s
Dec 20 13:12:08.044: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801029697s
Dec 20 13:12:10.051: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808087498s
Dec 20 13:12:13.539: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.295957499s
Dec 20 13:12:15.570: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.327448643s
Dec 20 13:12:17.588: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.34473866s
Dec 20 13:12:19.623: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.380456318s
STEP: Saw pod success
Dec 20 13:12:19.624: INFO: Pod "pod-configmaps-505dda42-232a-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:12:19.668: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-505dda42-232a-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 13:12:20.138: INFO: Waiting for pod pod-configmaps-505dda42-232a-11ea-851f-0242ac110004 to disappear
Dec 20 13:12:20.156: INFO: Pod pod-configmaps-505dda42-232a-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:12:20.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hnkm7" for this suite.
Dec 20 13:12:26.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:12:26.433: INFO: namespace: e2e-tests-configmap-hnkm7, resource: bindings, ignored listing per whitelist
Dec 20 13:12:26.674: INFO: namespace e2e-tests-configmap-hnkm7 deletion completed in 6.510157461s

• [SLOW TEST:24.173 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:12:26.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5e941bbe-232a-11ea-851f-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 20 13:12:27.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004" in namespace "e2e-tests-configmap-jdwsh" to be "success or failure"
Dec 20 13:12:27.094: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.342056ms
Dec 20 13:12:29.123: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051301121s
Dec 20 13:12:31.160: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088495041s
Dec 20 13:12:33.947: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875599051s
Dec 20 13:12:35.976: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904611378s
Dec 20 13:12:37.998: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.926462428s
STEP: Saw pod success
Dec 20 13:12:37.998: INFO: Pod "pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:12:38.037: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 20 13:12:39.093: INFO: Waiting for pod pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004 to disappear
Dec 20 13:12:39.111: INFO: Pod pod-configmaps-5e94bf81-232a-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:12:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jdwsh" for this suite.
Dec 20 13:12:45.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:12:45.601: INFO: namespace: e2e-tests-configmap-jdwsh, resource: bindings, ignored listing per whitelist
Dec 20 13:12:45.760: INFO: namespace e2e-tests-configmap-jdwsh deletion completed in 6.638078348s

• [SLOW TEST:19.086 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:12:45.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 20 13:12:46.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:12:48.464: INFO: stderr: ""
Dec 20 13:12:48.464: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 13:12:48.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:12:48.755: INFO: stderr: ""
Dec 20 13:12:48.755: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
Dec 20 13:12:48.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:12:49.024: INFO: stderr: ""
Dec 20 13:12:49.025: INFO: stdout: ""
Dec 20 13:12:49.025: INFO: update-demo-nautilus-6wvvx is created but not running
Dec 20 13:12:54.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:12:54.130: INFO: stderr: ""
Dec 20 13:12:54.130: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
Dec 20 13:12:54.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:12:55.680: INFO: stderr: ""
Dec 20 13:12:55.680: INFO: stdout: ""
Dec 20 13:12:55.680: INFO: update-demo-nautilus-6wvvx is created but not running
Dec 20 13:13:00.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:05.453: INFO: stderr: ""
Dec 20 13:13:05.453: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
Dec 20 13:13:05.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:06.050: INFO: stderr: ""
Dec 20 13:13:06.050: INFO: stdout: ""
Dec 20 13:13:06.050: INFO: update-demo-nautilus-6wvvx is created but not running
Dec 20 13:13:11.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:11.254: INFO: stderr: ""
Dec 20 13:13:11.255: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
Dec 20 13:13:11.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:11.387: INFO: stderr: ""
Dec 20 13:13:11.387: INFO: stdout: "true"
Dec 20 13:13:11.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:11.500: INFO: stderr: ""
Dec 20 13:13:11.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:11.500: INFO: validating pod update-demo-nautilus-6wvvx
Dec 20 13:13:11.531: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:11.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:11.531: INFO: update-demo-nautilus-6wvvx is verified up and running
Dec 20 13:13:11.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bz4gf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:11.637: INFO: stderr: ""
Dec 20 13:13:11.637: INFO: stdout: "true"
Dec 20 13:13:11.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bz4gf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:11.738: INFO: stderr: ""
Dec 20 13:13:11.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:11.739: INFO: validating pod update-demo-nautilus-bz4gf
Dec 20 13:13:11.748: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:11.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:11.748: INFO: update-demo-nautilus-bz4gf is verified up and running
STEP: scaling down the replication controller
Dec 20 13:13:11.750: INFO: scanned /root for discovery docs: 
Dec 20 13:13:11.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:13.100: INFO: stderr: ""
Dec 20 13:13:13.100: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 13:13:13.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:13.278: INFO: stderr: ""
Dec 20 13:13:13.278: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 20 13:13:18.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:18.476: INFO: stderr: ""
Dec 20 13:13:18.476: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-bz4gf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 20 13:13:23.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:23.693: INFO: stderr: ""
Dec 20 13:13:23.694: INFO: stdout: "update-demo-nautilus-6wvvx "
Dec 20 13:13:23.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:23.834: INFO: stderr: ""
Dec 20 13:13:23.834: INFO: stdout: "true"
Dec 20 13:13:23.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:24.061: INFO: stderr: ""
Dec 20 13:13:24.062: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:24.062: INFO: validating pod update-demo-nautilus-6wvvx
Dec 20 13:13:24.073: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:24.073: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:24.073: INFO: update-demo-nautilus-6wvvx is verified up and running
STEP: scaling up the replication controller
Dec 20 13:13:24.076: INFO: scanned /root for discovery docs: 
Dec 20 13:13:24.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:25.337: INFO: stderr: ""
Dec 20 13:13:25.337: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 20 13:13:25.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:25.469: INFO: stderr: ""
Dec 20 13:13:25.469: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-pczfp "
Dec 20 13:13:25.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:25.583: INFO: stderr: ""
Dec 20 13:13:25.584: INFO: stdout: "true"
Dec 20 13:13:25.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:25.738: INFO: stderr: ""
Dec 20 13:13:25.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:25.739: INFO: validating pod update-demo-nautilus-6wvvx
Dec 20 13:13:25.770: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:25.770: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:25.770: INFO: update-demo-nautilus-6wvvx is verified up and running
Dec 20 13:13:25.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pczfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:25.930: INFO: stderr: ""
Dec 20 13:13:25.930: INFO: stdout: ""
Dec 20 13:13:25.930: INFO: update-demo-nautilus-pczfp is created but not running
Dec 20 13:13:30.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:31.479: INFO: stderr: ""
Dec 20 13:13:31.479: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-pczfp "
Dec 20 13:13:31.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:32.891: INFO: stderr: ""
Dec 20 13:13:32.891: INFO: stdout: "true"
Dec 20 13:13:32.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:33.760: INFO: stderr: ""
Dec 20 13:13:33.760: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:33.760: INFO: validating pod update-demo-nautilus-6wvvx
Dec 20 13:13:33.797: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:33.798: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:33.798: INFO: update-demo-nautilus-6wvvx is verified up and running
Dec 20 13:13:33.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pczfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:34.006: INFO: stderr: ""
Dec 20 13:13:34.007: INFO: stdout: ""
Dec 20 13:13:34.007: INFO: update-demo-nautilus-pczfp is created but not running
Dec 20 13:13:39.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.186: INFO: stderr: ""
Dec 20 13:13:39.186: INFO: stdout: "update-demo-nautilus-6wvvx update-demo-nautilus-pczfp "
Dec 20 13:13:39.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.379: INFO: stderr: ""
Dec 20 13:13:39.379: INFO: stdout: "true"
Dec 20 13:13:39.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.489: INFO: stderr: ""
Dec 20 13:13:39.490: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:39.490: INFO: validating pod update-demo-nautilus-6wvvx
Dec 20 13:13:39.498: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:39.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:39.498: INFO: update-demo-nautilus-6wvvx is verified up and running
Dec 20 13:13:39.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pczfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.600: INFO: stderr: ""
Dec 20 13:13:39.600: INFO: stdout: "true"
Dec 20 13:13:39.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pczfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.720: INFO: stderr: ""
Dec 20 13:13:39.720: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 20 13:13:39.720: INFO: validating pod update-demo-nautilus-pczfp
Dec 20 13:13:39.735: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 20 13:13:39.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 20 13:13:39.735: INFO: update-demo-nautilus-pczfp is verified up and running
STEP: using delete to clean up resources
Dec 20 13:13:39.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:39.890: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 20 13:13:39.890: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 20 13:13:39.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jnxpf'
Dec 20 13:13:40.178: INFO: stderr: "No resources found.\n"
Dec 20 13:13:40.178: INFO: stdout: ""
Dec 20 13:13:40.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-jnxpf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 20 13:13:40.342: INFO: stderr: ""
Dec 20 13:13:40.342: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:13:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jnxpf" for this suite.
Dec 20 13:14:06.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:14:06.542: INFO: namespace: e2e-tests-kubectl-jnxpf, resource: bindings, ignored listing per whitelist
Dec 20 13:14:06.644: INFO: namespace e2e-tests-kubectl-jnxpf deletion completed in 26.285478672s

• [SLOW TEST:80.883 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 20 13:14:06.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-9a30847f-232a-11ea-851f-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 20 13:14:07.091: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004" in namespace "e2e-tests-projected-2ffdk" to be "success or failure"
Dec 20 13:14:07.140: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 48.792749ms
Dec 20 13:14:09.150: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059022511s
Dec 20 13:14:11.161: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069490263s
Dec 20 13:14:13.194: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103052528s
Dec 20 13:14:16.622: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.530564609s
Dec 20 13:14:18.642: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.550923486s
Dec 20 13:14:20.661: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.569369882s
STEP: Saw pod success
Dec 20 13:14:20.661: INFO: Pod "pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004" satisfied condition "success or failure"
Dec 20 13:14:20.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 20 13:14:20.799: INFO: Waiting for pod pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004 to disappear
Dec 20 13:14:20.840: INFO: Pod pod-projected-secrets-9a31e885-232a-11ea-851f-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 20 13:14:20.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2ffdk" for this suite.
Dec 20 13:14:29.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 13:14:29.177: INFO: namespace: e2e-tests-projected-2ffdk, resource: bindings, ignored listing per whitelist
Dec 20 13:14:29.184: INFO: namespace e2e-tests-projected-2ffdk deletion completed in 8.329856324s

• [SLOW TEST:22.539 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
Dec 20 13:14:29.184: INFO: Running AfterSuite actions on all nodes
Dec 20 13:14:29.184: INFO: Running AfterSuite actions on node 1
Dec 20 13:14:29.184: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8834.039 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS